Security researcher Chaofan Shou tweeted yesterday morning that the source code for Anthropic’s Claude Code agent had leaked: [Twitter, archive]
Claude code source code has been leaked via a map file in their npm registry!
Anthropic included a source map — a debugging file — in the Claude Code NPM package. This can turn the minified code in the package back into the original source code.
The leak was 1,900 files with 512,000 lines of code.
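This is why a shipped `.map` file is a full leak, not just a hint: a source map is plain JSON, and when the build tool embeds `sourcesContent`, the original files are sitting in the map verbatim. A minimal sketch, with a hypothetical miniature map standing in for the real thing:

```javascript
// A source map is just JSON. The map below is a made-up miniature,
// not Anthropic's actual file.
const map = JSON.parse(`{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/cli.ts", "src/agent.ts"],
  "sourcesContent": ["export const main = () => {};", "export const agent = 1;"],
  "mappings": "AAAA"
}`);

// "Recovering the source" is just pairing filenames with embedded content.
// No reverse engineering of the minified bundle required.
const recovered = Object.fromEntries(
  map.sources.map((name, i) => [name, map.sourcesContent[i] ?? null])
);
console.log(Object.keys(recovered)); // [ 'src/cli.ts', 'src/agent.ts' ]
```

Standard bundlers only include `sourcesContent` if you tell them to, which is what makes shipping it to the public registry a packaging error.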
Anthropic said: [Bloomberg, archive]
This was a release packaging issue caused by human error, not a security breach.
The AI cannot fail — it can only be failed. Human errors can also be security breaches. If your release system makes this sort of error possible at all, that’s on you.
But then, Anthropic vibe coded this system. What do we expect.
Everyone’s been looking through this code dump to see what it does. Duke of Germany on Mastodon said: [Mastodon]
After looking at the code, my understanding of how Claude works: “Throw insane amounts of compute at some developer fan fiction and hope for the best.” Did I get that right?
Yep, that’s about right. There’s bits of actual code in there. But most of it is prompts that plead with the bot not to screw it up this time.
Claude Code’s creator, Boris Cherny from Anthropic, tweeted last month that Claude Code is vibe coded: [Twitter, archive]
Can confirm Claude Code is 100% written by Claude Code.
That puts Claude Code’s copyright status in serious doubt. You cannot copyright AI output in the US.
Anthropic is sending DMCA notices to get copies of the repository taken down. Claiming copyright on uncopyrightable material is fraudulent, and it’s perjury if you do it in a DMCA notice. If you get one of these, you might want to counterclaim accordingly. [WSJ, archive]
Also, whatever code the chatbot originally stole from is likely under a variety of other licenses. So Anthropic may have violated those copyrights.
Of course, a pile of free vibe code is worth less than zero as code. The only use for this pile is working out what nonsense Anthropic thinks is production machinery.
- There’s an instruction not to write any security holes. I’m sure that works great.
- You can’t use Claude Code to write hacking tools! Unless you tell it you’re a security researcher. Then it’s happy to help.
- There’s an “undercover” mode, which you use when you want to send slop to a public project without them realising you’re using a bot. This is specifically for use against public projects. Anthropic knows what they’re doing here. This is reason enough for projects that bar AI to bar all Anthropic employees too.
Claude Code sends all your stuff to Anthropic: [Register]
“I don’t think people realize that every single file Claude looks at gets saved and uploaded to Anthropic,” the researcher “Antlers” told us. “If it’s seen a file on your device, Anthropic has a copy.”
Can you take this code leak and run Claude Code locally, without paying Anthropic? Sure, just point it at a local model instead of the Claude API. It’ll be super-slow unless you spend enough money to match the performance of the Claude API. But I’m sure there are a lot of people who are trying just that thing right now.
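The usual knob for this sort of thing is the `ANTHROPIC_BASE_URL` environment variable, which Claude Code honours for gateway setups. A hedged sketch; the URL and token below are placeholders, not real endpoints:

```shell
# Point Claude Code at any server that speaks the Anthropic Messages API.
# Placeholder values: swap in whatever proxy fronts your local model.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="local-dummy-token"   # many proxies still expect some bearer token
# claude   # would now talk to your proxy instead of api.anthropic.com
```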
In the past few months, we’ve seen a slew of formerly respected software engineers try the bot. It one-shots them, and they start posting 2,000-word tweets about how awesome Claude Code is, it’s the future of coding, don’t be left behind! They never show you testable numbers or anything. Trust me, bro.
People who’ve been forced to touch Claude Code at work tell me it’s noticeably more sycophantic than older models. Claude Code really wants to make you feel good about vibe coding.
But also, Claude Code is leaning hard into gambling addiction — the “Hooked” model. You reward the user with an intermittent, variable reward. This keeps them coming back in the hope of the big win. And it turns them into gambling addicts.
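The mechanism here is the textbook variable-ratio reward schedule, the same one slot machines run on. A toy sketch, nothing from the leak:

```javascript
// Toy variable-ratio schedule: each attempt wins independently with
// probability 1/ratio, so payoffs average out but never arrive on a
// predictable beat. That unpredictability is the hook.
function pull(ratio) {
  return Math.random() < 1 / ratio;
}

let hits = 0;
const pulls = 100000;
for (let i = 0; i < pulls; i++) if (pull(5)) hits++;

// The long-run hit rate converges on 1/ratio (about 0.2 here),
// but no individual pull tells you anything about the next one.
console.log((hits / pulls).toFixed(2));
```

Psychology has known since Skinner that variable-ratio schedules produce the most persistent, extinction-resistant behaviour of any reward pattern. That’s the point.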
Jonny from Neuromatch describes how Claude Code works, looking at the codebase: [Mastodon]
This is an important feature of the gambling addiction formulation of these tools: only the margin matters, the last generation … The intermediate comments from the LLM where it discovers prior structure and boldly decides to forge ahead brand new are also part of the reward cycle: we are going up, forever. Cleaning up after ourselves is down there.
Jonny compares Claude Code to exploitative pay-to-win mobile games. Addiction loops. Anthropic’s gamified vibe coding.
Claude Code is expensive Candy Crush, but it tells you you’re being productive. As it teaches you to forget how to code. Just keep paying Anthropic.
Remember: every day is AI Fool’s Day.