
Anthropic just had a really messy week

At first glance, they look like two random fuck-ups. But together they point to something much bigger, and a bit more unsettling.

The messy week at Anthropic (late March 2026) is a kind of canary in the coal mine for the whole AI industry.

On the surface, it just looks like basic IT mistakes. But underneath it shows something more important: the race to build frontier AI is moving faster than the people and systems trying to control it.

Here’s what actually happened, and why it matters.

Two separate human errors happened within days of each other.

First, the CMS leak. Anthropic accidentally left around 3,000 internal files publicly accessible on its website. Among them were references to an unreleased model tier called “Claude Mythos,” reportedly built on a new internal engine called “Capybara.”

Then came the source code leak. On March 31, a developer accidentally published the “map” to Anthropic’s Claude Code system – its autonomous coding tool – on a public registry.

That exposed more than 500,000 lines of code, effectively revealing how the system thinks through tasks, stores memory, and interacts with a computer environment.
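The article doesn't say exactly how the code reached the public registry, but a common packaging slip with npm (assumed here purely for illustration, not confirmed as the actual cause) is shipping bundler source maps, which can embed the original source code. One minimal guard is an explicit `files` whitelist in `package.json`:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js"
  ]
}
```

Without a whitelist like this, `npm publish` includes everything not excluded by `.npmignore` (or `.gitignore` as a fallback), including `.map` files whose `sourcesContent` field can contain the entire pre-bundled codebase. Running `npm pack --dry-run` before publishing lists exactly what would be uploaded.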

Even if you’re not technical, this matters in three ways.

First, “Claude Mythos” doesn’t look like a small upgrade. It points to a serious jump in coding and cybersecurity capability.

Internal descriptions suggest systems that could find software vulnerabilities faster than humans can fix them. That's likely part of why it hasn't been released. Some AI systems may already be strong enough that companies are hesitating to put them out publicly.

Second, the Claude Code leak shows a shift that’s easy to miss: AI is moving from talking to acting.

We’re no longer just dealing with systems that answer prompts. The leaked material describes AI that can run terminal commands, manage memory, self-correct, and complete multi-step workflows with minimal human input. In short: AI that does things, not just explains things.

For users, that gap between “asking AI” and “delegating work to AI” is closing fast.

The reassuring part is that no customer data was exposed. The less reassuring part is how all of this happened: basic operational mistakes. Packaging errors. Misconfigured systems. Human slip-ups.

That’s what makes it more worrying.

It shows a mismatch between how powerful these systems are becoming and how fragile the infrastructure around them still is. Even well-resourced AI labs can be undone by very ordinary mistakes, and the stakes keep rising.

Zooming out, the deeper issue is simple: the complexity of AI is starting to outrun the systems built to manage it.

Companies like Anthropic are under enormous pressure to ship faster than rivals like OpenAI and Google. That creates constant “high-velocity development pressure” – where speed becomes the priority, and small errors slip through with outsized consequences.

So you end up with cutting-edge AI being built on infrastructure that still breaks in very familiar, very human ways.

That’s the real tension here.

We’re entering a phase where AI systems are becoming extremely advanced, but the machinery around them is still catching up, sometimes dangerously slowly.

And for everyone else, that means two things at once: AI is about to get a lot more powerful, but the way it arrives will probably be messy, uneven, and shaped by mistakes that feel almost surprisingly basic.