This week, Anthropic hit $30 billion in annual recurring revenue. OpenAI sits at $25 billion. For the first time in both companies' histories, the "safety lab" is ahead.
Everyone expected OpenAI to dominate this race indefinitely. It had the brand recognition, the ChatGPT moment, the consumer mindshare. Anthropic was the serious, slightly earnest younger sibling that people respected but didn't necessarily use. The media narrative for years was "serious people who are losing."
So what happened?
Here's the number that matters more than the revenue headline: Anthropic spent roughly a quarter of what OpenAI spent on model training while generating more revenue. OpenAI is on track to spend $125 billion per year on training by 2030. Anthropic's projected spend for the same period: about $30 billion. Comparable revenue. Wildly different burn rate.
This isn't a story about AI safety defeating AI capability. It's a story about deliberation defeating speed.
The sprint problem
OpenAI's move-fast approach produced ChatGPT, which was a genuine shock-and-awe moment. Nothing about that is diminished. But sprinting at that pace has a cost: architectural decisions made under pressure, safety reviews compressed, training runs optimized for benchmarks over real-world performance.
Anthropic, by contrast, was slow in ways that looked like weakness. They took longer to ship consumer products. Their API was pricier. They talked about safety in ways that sometimes felt like a fundraising talking point. And yet here we are.
The enterprise customers — the ones who spend $1 million or more per year — didn't read the media narrative. They tested the models. They ran them on their actual problems. They noticed something: Claude was more reliable on the stuff that mattered. Not necessarily faster on benchmarks, but more consistent on real tasks where getting it wrong is expensive.
Anthropic now has over 1,000 enterprise customers spending $1M+ annually. That number doubled in less than two months. These aren't consumers who impulsively switched because of a viral tweet. These are procurement teams and CIOs who evaluated models, deliberated internally, and chose the one that performed under pressure.
The sprint got ChatGPT to 100 million users. The deliberation got Anthropic to $30 billion in revenue. One of those is a better business.
What this pattern looks like at every scale
Here's why this is interesting beyond the business school case study: this exact dynamic plays out at the decision level too.
When you need a quick answer, a single fast model is fine. Ask it what the capital of France is. Done. No deliberation required.
But when the stakes are higher — a hiring call, a strategy pivot, a contract term you're not sure about — the sprint answer is often confidently wrong in ways that are expensive to fix. Not because any individual model is bad, but because no single model catches its own blind spots.
This is the same reason Anthropic overtook OpenAI. The careful architectural decisions, the ones that took longer, required more internal debate, more review cycles, created a model that performs more reliably on hard problems. The training process was, in effect, more deliberate.
When you run a multi-model council on a hard question, you're doing the same thing at the decision level. Claude argues one direction. GPT-4 pushes back. Gemini surfaces an angle neither saw. The disagreements aren't noise — they're signal. They're exactly the kind of internal debate that Anthropic ran for years while OpenAI was sprinting to ship.
The revenue chart is just the deliberation pattern made visible at massive scale.
The distillation problem, and what it reveals
One more piece of this week's news worth noting: OpenAI, Anthropic, and Google announced they're working together through the Frontier Model Forum to stop Chinese AI companies from distilling their models. The short version — DeepSeek, Moonshot AI, and MiniMax allegedly used 24,000 fraudulent accounts to run 16 million conversations with Claude, then used those outputs to train cheaper knockoffs.
The interesting thing about model distillation isn't the theft. It's what survives and what doesn't.
When you distill from Claude, you get outputs. You get the words, the surface-level reasoning, the general style. What you don't get — what can't be captured by copying outputs — is the deliberative process that produced those outputs. The safety training, the alignment layers, the multi-stage review that shaped how Claude reasons under pressure. Those come from Anthropic's internal methodology. They live in the training process, not in the final answers.
A distilled model is like a student who copied all the homework without attending any of the lectures. The surface looks similar. The reasoning breaks differently when you stress-test it.
This isn't just a concern for regulators. It's a useful frame for anyone using AI on anything that matters: the output isn't the whole product. The process that produced it matters. And that process — whether at the company level or the individual decision level — is exactly what gets lost when you compress it.
What to actually do with this
If you're using AI for decisions where getting it wrong has real consequences, the Anthropic-OpenAI story is a useful data point. The methodology that won at $30 billion scale is the same one that catches errors in individual decisions: deliberation over speed. Multiple perspectives over one confident answer.
You don't need to run a multi-billion-dollar AI lab to access this. You can do it on a single question in about ten minutes: present your problem to Claude, then to GPT-4, then to Gemini. Read where they agree. Read where they push back on each other. The disagreements are where the interesting reasoning lives — and usually where the expensive mistakes hide.
That's the whole idea behind an AI council, by the way. Set up the debate, run the deliberation, get back a synthesis with the minority opinions preserved. It's slower than asking one model once. It's also more likely to catch the thing you'd regret missing.
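For the mechanically minded, the council loop described above can be sketched in a few lines of Python. Everything here is a placeholder: the member names and the `ask` callables stand in for real model APIs, and answers are compared as exact strings, where a real council would compare meaning.

```python
from collections import Counter

def run_council(question, members):
    """Ask every council member the same question and group the answers.

    `members` maps a model name to a callable that takes the question
    and returns that model's answer as a string.
    """
    answers = {name: ask(question) for name, ask in members.items()}

    # Agreement is signal; so is disagreement. Tally identical answers
    # and keep the dissenting ones rather than discarding them.
    tally = Counter(answers.values())
    majority, _votes = tally.most_common(1)[0]
    minority = {name: a for name, a in answers.items() if a != majority}

    return {"answers": answers, "majority": majority, "minority": minority}

# Hypothetical members standing in for calls to real model providers.
members = {
    "claude": lambda q: "raise prices",
    "gpt-4": lambda q: "raise prices",
    "gemini": lambda q: "hold prices, cut costs",
}

result = run_council("Should we raise prices next quarter?", members)
print(result["majority"])   # the consensus answer
print(result["minority"])   # the preserved dissent
```

The design choice worth noticing is the last line of the return value: the minority opinions survive the synthesis instead of being averaged away, which is where the expensive mistakes tend to hide.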
The careful lab just beat the fast one. That's not an accident. And it's not a one-time thing.
Try it free — no signup. shingik.ai