The full source code of Anthropic's Claude Code has leaked. Claude is seen by many as the best coding LLM on the market, with Anthropic proudly stating that Claude Code itself is mostly written by the LLM.

Now this sounds good as long as nobody can see the code, which is quite the trash fire. Detecting "code sentiment" via regular expressions, variable and function names containing prompt fragments trying to influence the bot, a completely opaque mess of a control flow that makes actual maintenance and debugging functionally impossible, and the prompts ... oh, the prompts. All the begging and pleading with the chatbot not to do this, or not to do that, or please to do this.

It is fascinating but it is as far away from actual engineering as drunkenly pissing your name in the snow. Dunno what you call the people prompting software at Anthropic but "engineer" is not it.

Now it is fun to look at the currently hyped product stripped bare, showing its pathetic quality, but that is the future of software if we let those companies continue to undermine every good practice software engineering has tried to establish.

The software we have to use will be bad, insecure, unmaintainable, and expensive, with nobody having the skills or resources to build something better. As I wrote a few months ago: LLM-based software production is equivalent to saying that fast fashion should be the only way to produce clothing. A tragic degeneration of the quality of the artefacts we rely on, built for maximum profit on the backs of people in countries of the global majority.

in reply to Rens van der Heijden

@Namnatulco
This Reddit thread is a good example. A dev posts some visible problems with the leaked code — some problems more serious than others, but just about everything in there counts as dubious “code smell”, sketchy team/organization culture, or both, at a minimum.

The general response? Blowing it off. Stating that the person posting this doesn’t know what real software development is like. Reinforced by an AI-generated summary at the top of the thread, emphasizing the sentiment just described, and generally belittling the concerns raised here.

(TL;DR summary of my own experience with enterprise software development, going back almost 30 years: the quality of what’s in the Claude Code leak is, at the absolute best, definitely on the “very dysfunctional” side of “typical”. I’ve seen lots of dubious code, outdated code, tech debt time bombs as code, and just plain crap code; I have *not* routinely seen codebases for major applications that look quite like this.)

reddit.com/r/ClaudeAI/comments…

in reply to Rens van der Heijden

@Namnatulco software engineers are in general excellent at criticism - the fast iterative workflow makes this skill more useful than in, say, surgery. But asking them about code their group wrote will get the answer that it is terrible. But IME the problems are more often with conditions than with the code itself. A lot of times management won't pay for the right way, and years of accumulated tech debt are hard to maintain, but the quality is higher than the people say.
in reply to tante

Salesforce has Agentforce Vibes and if you have access to it you can see a lot of the prompts they wrote to guide the LLM. There's very little to stop you from copying it all.

It's the same with a lot of their agentic offerings, you can look under the hood and customise how it works. All they're selling you is a prompt library and the ability to run the LLM on their infrastructure.

salesforce.com/agentforce/deve…

in reply to tante

Ever since the Industrial Revolution, capitalism has been thriving on producing bad quality for cheap, and then selling that cheap disposable shit to everyone so they have to throw it out and replace it when it breaks. Before the Industrial Revolution, people didn't have many pieces of clothing, but their clothes lasted for decades. Then mechanical looms produced so much cloth for so cheap that weavers lost their livelihoods, even though their artisanal cloth was far superior. All of a sudden, many people could afford to buy a lot of new clothes, but then they had to keep buying them because the fabric didn't last as long as it used to.

Making things cheap, shoddy, disposable, has always been the general direction of industrial mass production. If it's cheap enough, who cares if it breaks after a while, just buy a new one.

First it was textiles. Then it was all kinds of consumer goods. After WW2, everything became increasingly disposable. Nowadays even entire washing machines are often made out of plastic so that they break after a decade. Automated factories don't care if their products are of worse quality than the things that came before, they just dump their cheap shit onto the marketplace until the competition crumbles.

in reply to tante

... the scaffolding around a natural-language engine isn't going to look like old-style heuristic code. That's what all those highly paid "prompt engineers" have been doing.

Although it clearly eases the pain for people to come together and ritually denounce AI and all that sail in it, the fact is coding AIs work very well and aren't going anywhere even when the "add clippy to everything" bubble bursts.

Just bear in mind it's widely used, but quietly on masto due to all the pitchforks.


in reply to Dahie

@dahie

Fast - There are no performance issues noted.

Efficient - (By definition: efficient code is software that achieves its intended functionality using the minimum necessary computational resources.) In fact it does exactly that. The "hahaha, regex" folk are condemning it for being efficient. Instead of kicking off an AI sentiment check, it checks for 100+ words. That's super efficient.
The JSON stuff is just a misinterpretation of what the code does; it's fallback error handling.
Other than that I have not heard of any specific "inefficiencies".
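(For the sake of argument, the cheap check being defended above can be sketched like this. This is a hypothetical illustration of a word-list regex filter, not the actual Claude Code implementation; the word list and function names are made up.)

```python
import re

# Hypothetical list of flagged words; the leaked tool reportedly
# matches 100+ such terms.
NEGATIVE_WORDS = ["broken", "terrible", "hack", "awful", "stupid"]

# One compiled alternation: a single linear scan over the text,
# far cheaper than a round trip to an LLM for a sentiment verdict.
_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, NEGATIVE_WORDS)) + r")\b",
    re.IGNORECASE,
)

def looks_negative(text: str) -> bool:
    """Return True if the text contains any flagged word."""
    return _PATTERN.search(text) is not None
```

Whether you call that "efficient" or "crude" is exactly the disagreement in this thread: it is cheap and fast, but it has no notion of context.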

Reliable - The compounding net correctness is outpacing the compounding errors. Claude Code is reliable and increasingly so, as evidenced by its adoption (not amongst the forest folk, of course).

Secure - Once again, in the initial reporting, I have not seen any security holes. The leak of the source itself was a human error.

Maybe I missed something in the analysis so far.
If you can point me to any breaches of those 4 criteria, I'd happily read more.
No doubt, we will find out more soon.

Edit: A super quick look says there is SOME merit in your claims (although they're a stretch). Security is rock solid though. I'll have to evaluate the others fully (they are tenuous IMHO).

#claudeleak #claudecode

in reply to Robin Syl 🌸

FWIW, people who create with AI should be aware that they aren't protected if they plan on doing something more with the code they generate with AI.

Also, there is an argument to be made to leave it up, since the courts came out and said machine-generated creations aren't copyrightable, and the US Supreme Court refused to take up the appeal, with the courts finding that the Copyright Act “protects only works of human creation” and “requires all eligible work to be authored in the first instance by a human being.” @tante

For law buffs, the cites are here:

Thaler v. Perlmutter, 687 F. Supp. 3d 140 (D.D.C. 2023)

Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025)

in reply to tante

Whenever I buy fast fashion, I buy it expecting it to fall apart because I'm broke. When it *doesn't* it's an unexpected surprise, a production glitch.

A future where you'll be saying "Oh wow, my banking app/age verification app/bluetooth-integrated blood sugar monitor app *didn't* shit itself and delete all my saved data/log me out of my accounts/publicly leak my personal data to the internet today!" is... not quite the impressive feat that OpenAI & Co think it is. 🥲


in reply to tante

Funny thing, I released a lengthy blog post over the weekend trying to work out the start of how we actually solve (instead of defining away) the Software Crisis, because once people start finally putting the pieces together (botched schedules, slower work, error-riddled work, deskilling, etc.) and the bubble bursts, it's going to cost SO MUCH to get things into shape after thirty years of Crisis packed into twelve months.