The Anthropic Paradox: Their AI makes you worse at coding, and that's why they need you to keep using it

Anthropic published a study showing developers who use AI coding assistance score 17% lower on skill mastery, meaning their ability to solve coding problems without AI help. The same week, VentureBeat reported that Anthropic's Claude Code security tool found over 500 high-severity vulnerabilities in production open-source codebases.

Most people read those as two separate stories. They're the same story.

The skill gap nobody wants to advertise

The 17% figure isn't a rounding error. It's a systematic effect. When you use AI to write a function, you skip the struggle where actual learning happens: the part where you reason about edge cases, recall similar patterns, and make design decisions. The AI writes the code and takes over the mental work that would otherwise have trained you.

This shows up in a predictable place: security. Security bugs aren't usually exotic. They come from misunderstanding data flow, trusting input that shouldn't be trusted, not thinking through failure cases. These are exactly what you stop thinking through when an AI handles the implementation.
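To make that concrete, here's a generic sketch of the "trusting input" failure, the kind of bug that sails through when you don't read what was generated. Nothing here is from the study or the scans; the names and the upload directory are invented for illustration.

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/uploads")  # hypothetical upload directory


def resolve_upload_unsafe(name: str) -> Path:
    # Trusts the caller-supplied filename outright:
    # "../../etc/passwd" (or an absolute path) escapes UPLOAD_ROOT.
    return UPLOAD_ROOT / name


def resolve_upload(name: str) -> Path:
    # Normalize first, then confirm the result is still inside the root.
    target = (UPLOAD_ROOT / name).resolve()
    if not target.is_relative_to(UPLOAD_ROOT.resolve()):
        raise PermissionError(f"path escapes upload root: {name!r}")
    return target
```

The unsafe version passes every test that sends a normal filename. The question it never asks, namely where this input can have come from, is exactly the kind of question that atrophies.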

The developers most likely to introduce the 500+ vulnerabilities Claude Code found are the same developers who've been using AI coding tools without understanding what those tools are generating.

I'm connecting two data points that weren't designed to connect. The study doesn't track who wrote vulnerable code, and Claude Code didn't survey whether those codebases were AI-assisted. But the pattern is consistent with what the research would predict.

The business model hidden in plain sight

Anthropic publishes research showing AI coding tools reduce skill mastery. Then Anthropic announces Claude Code Security, a product that finds the holes created by developers with degraded skill mastery. They are, in a sense, selling you the antidote to a problem their other product helped create. Static analysis tools like Semgrep and Snyk have done automated vulnerability scanning for years. What's different is that Anthropic's own study now explains why there's more to find.

I'm not saying this was planned. I don't think Anthropic ran the numbers and decided to manufacture a security crisis. But the incentive structure deserves scrutiny. When a company profits from both the productivity tool and the security audit tool, it has very little reason to fix the underlying skill erosion problem. A world where developers are highly productive but increasingly dependent on AI is a world that permanently needs Anthropic's products.

The question isn't whether Anthropic intended this. It's whether they'll ever have a financial reason to reverse it.

Yes, this applies to most security vendors. Antivirus companies profit from the viruses they help you block. What's different here is that Anthropic published the evidence themselves. Most companies don't hand you the study showing their product has a measurable downside. That's either unusually honest, or unusually confident that you'll keep buying anyway.

What vibe coding actually costs

Towards Data Science put it plainly in a piece on "vibe coding's security debt": when developers stop understanding their code, security review becomes the last line of defense instead of one layer in a secure-by-design system. And when that last line is also an AI tool running scans and suggesting fixes, you've built a system where nobody involved actually understands the code.

A Dev.to article asked why AI can build an app in five minutes but deployment still takes two hours. The answer points at the same gap. The AI handles what it can see: syntax, patterns, boilerplate. It doesn't see infrastructure assumptions, secret management, permission models, the organizational context that determines whether a deployment is safe. That knowledge lives in human developers. Or it used to.

Vibe coding doesn't create bad security. It creates developers who don't know which security questions to ask.

What this means if you're writing code today

I use Claude Code every day, and I've noticed the temptation to accept generated code I don't fully understand. Last week I accepted generated middleware for handling request validation. It worked, passed my tests, looked clean. I rewrote it anyway out of mild paranoia, and found it was silently swallowing a class of malformed input instead of rejecting it. Code that works is not the same as code you understand.
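The shape of that bug, reconstructed from memory with invented names (a sketch, not the actual middleware): the generated version caught the parse error and fell back to a default, so malformed requests were quietly treated as empty instead of being rejected.

```python
import json


def parse_body_generated(raw: str) -> dict:
    # What the generated middleware effectively did: on any parse
    # failure, fall back to an empty payload. Every test that sent
    # valid JSON passed; malformed requests were silently accepted.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}


def parse_body(raw: str) -> dict:
    # What validation middleware should do: reject malformed input
    # loudly so the caller can return a 400, not a quiet default.
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed request body: {exc}") from exc
    if not isinstance(payload, dict):
        raise ValueError("request body must be a JSON object")
    return payload
```

Both versions pass a test suite that only sends well-formed JSON, which is why the tests told me nothing.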

The developers who will stay relevant aren't the ones who use AI most aggressively. They're the ones who use it as an accelerator while keeping genuine understanding of what's being generated. That means actually reading the output, pushing back when something doesn't make sense, and occasionally writing things by hand, not because it's faster, but because doing it yourself is how you stay calibrated.

The 17% skill reduction isn't a reason to stop using AI coding tools. It's a reason to use them differently.

Code review and security training still matter. Using AI carefully isn't a substitute for either. The point is that good process gets harder to maintain when nobody in the room fully understands what the code is doing.

Anthropic's research is a warning from an unlikely source. The company selling you the AI coding assistant is telling you it has a measurable downside. That's worth taking seriously, even if their next move is to sell you the tool that catches the problems.