Yesterday, Anthropic’s Claude Code source code leaked. Within hours, all of LinkedIn had an opinion. Screenshots got dissected. Hot takes got posted. People who’ve never shipped an AI product in their lives suddenly had strong feelings about internal code quality.
But almost everyone focused on the wrong thing.
The response nobody expected
Boris Cherny, the creator of Claude Code, posted this on X:
“It’s never the individual’s fault. It’s the fault of the process, the culture, or the infrastructure.”
No finger-pointing. No “we’re investigating who did this.” No carefully worded corporate statement about “taking appropriate action.” Just: this broke, here’s how we’re going to fix it.
That’s it. That’s the whole statement.
And honestly? That single sentence tells you more about how Anthropic operates than anything in the leaked code ever could.
The skeptic’s question
The obvious pushback writes itself: “Wait, you guys say Claude writes 100% of the code, but now suddenly it’s human error?”
Yes. Both things are true at the same time.
Behind every AI agent writing code, there’s still a developer. And behind every developer, there’s a process that either catches mistakes or doesn’t. The agent doesn’t deploy itself. It doesn’t configure its own permissions. It doesn’t decide what gets committed to a public repo. Humans do. Processes do. Infrastructure does.
The interesting question was never “who screwed up.” It was “what in the system allowed this to happen, and how do we close that gap.”
Why blame kills velocity
Here’s what’s happening in most engineering orgs right now. Teams are moving faster than ever. AI is accelerating everything - prototyping, shipping, iterating. The speed is genuinely unprecedented.
But speed comes with a cost. You will make mistakes. Things will break. Code will leak, deploys will fail, databases will go down. That’s not a bug in the process of moving fast. That’s a feature.
The moment you start punishing individuals for mistakes, you don’t get fewer mistakes. You get fewer people willing to try.
Engineers stop experimenting. They stop shipping risky features. They stop pushing boundaries. They start optimizing for not getting blamed instead of optimizing for building something great. Every decision gets filtered through “what happens to me if this goes wrong” instead of “what’s the best thing I can build.”
That’s how organizations stagnate. Not from a lack of talent. From a culture that makes talent afraid to move.
Process over people
The teams that ship the best software aren’t the ones that never break things. They’re the ones that have systems in place to catch breakage early and recover fast.
Blameless postmortems. Not “who did this” but “what let this happen.” Every incident becomes a process improvement, not a performance review.
Guardrails, not gatekeepers. Automated checks, sandboxed environments, permission boundaries. Systems that prevent mistakes before humans get a chance to make them.
Psychological safety. The boring, overused term that somehow still gets ignored everywhere. If your team is afraid to say “I broke something,” they’ll hide problems until those problems become catastrophes.
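The “guardrails, not gatekeepers” idea is concrete enough to sketch. Here’s a minimal, hypothetical illustration - this assumes nothing about Anthropic’s actual tooling, and the patterns and file names are made up: a small script that scans files for secret-shaped strings so a pre-commit hook can block the commit before a human ever has to remember to be careful.

```python
import re
import sys

# Hypothetical guardrail sketch. These patterns are illustrative
# assumptions, not anyone's real detection rules.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                      # API-key-shaped strings
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key headers
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),       # AWS credential lines
]

def find_secrets(text: str) -> list[str]:
    """Return every secret-like match found in the given text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def check_files(paths: list[str]) -> int:
    """Return exit code 1 if any file contains a secret-like string."""
    failed = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for hit in find_secrets(f.read()):
                # Print only a prefix so the scanner itself doesn't echo the secret.
                print(f"{path}: possible secret: {hit[:12]}...")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__" and sys.argv[1:]:
    sys.exit(check_files(sys.argv[1:]))
```

Wired into a Git pre-commit hook (e.g. `python check_secrets.py $(git diff --cached --name-only)`), the guardrail fires automatically - the system catches the mistake before any individual has the chance to make it.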
The Claude Code leak will be forgotten in a week. But the culture that produced that response - blame the process, fix the system, protect the people - that’s the thing worth paying attention to.
How your team handles the crisis matters infinitely more than the code that caused it.