Sora Is Dead. The Decision Behind It Deserved More Than One Answer.

OpenAI killed Sora last week. Not a soft deprecation — a full shutdown. The app, the API, the Sora.com domain — all gone by March 24, 2026. The $1 billion Disney licensing deal died with it. That money never changed hands.

The numbers tell a blunt story: inference was running $15 million per day. Downloads peaked at 3.33 million last November, then fell 66% to 1.13 million by February. Total lifetime revenue from the app: $2.1 million. You don't need a spreadsheet to see the gap.
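If you do want the spreadsheet anyway, the arithmetic fits in a few lines. A rough Python sketch using only the figures above; the "days of burn covered by revenue" framing is my own illustration, not a reported metric:

    # Back-of-the-envelope math on the Sora shutdown, from the publicly cited figures.
    daily_inference_cost = 15_000_000   # $15M per day in inference
    lifetime_revenue = 2_100_000        # $2.1M total app revenue
    peak_downloads = 3_330_000          # November peak
    feb_downloads = 1_130_000           # February

    decline = 1 - feb_downloads / peak_downloads
    days_covered = lifetime_revenue / daily_inference_cost

    print(f"Download decline: {decline:.0%}")                                 # ~66%
    print(f"Days of inference covered by lifetime revenue: {days_covered:.2f}")  # ~0.14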

Sam Altman made the call: kill the product, free up compute, redirect the Sora team toward world simulation for robotics. The bet is that the underlying technology is worth more in manufacturing and logistics than in consumer video.

Maybe that's right. Maybe it isn't. That's kind of the point.


The Decisions That Deserve More Than One Opinion

There's a category of decision I treat differently from everyday queries. When someone asks an AI "what's the best way to format a CSV?", one model's answer is fine. The cost of being wrong is approximately zero.

But when the question is "should we kill this product we've spent two years building?", the cost of being wrong is not zero. The mistake is irreversible. It's directional. It commits resources for years and strains relationships for even longer.

These are exactly the decisions where a single model giving you one confident answer is actually a liability.

Not because the model is wrong — it might be completely right. Claude might say "kill Sora, the economics don't work" and be correct. But confident single-model advice on an irreversible bet smuggles in a specific set of assumptions, weights, and blind spots you don't even know you're inheriting.

The Sora decision had at least five real tensions inside it:

  • Short-term economics vs. long-term positioning. The numbers were brutal, but consumer AI video isn't necessarily a dead market — just an expensive one in 2026.
  • Kill vs. pivot. Could the technology survive in a different form without the consumer app? (Turns out yes — the robotics redirect suggests Altman thought so too.)
  • Optics of shutting down vs. optics of bleeding slowly. Both look bad. They look different kinds of bad.
  • Team morale. Killing a flagship product is demoralizing. Redirecting to a harder, longer-horizon problem is… also demoralizing, in a different way.
  • Partner fallout. The Disney deal was collateral damage. One billion dollars on the table, gone.

Ask one model for advice on this decision, and you get one weighting of these tensions. That weighting comes with a perspective you didn't choose — and can't fully audit.


What Deliberation Looks Like on a Decision Like This

Here's what I've noticed when you run a real AI council on a kill-vs-continue question: the models don't just disagree — they disagree usefully.

One model anchors on the economic math and says pull the plug immediately. Another pushes back on the timeline ("the consumer video market isn't dead, it's early — you'd be exiting at the bottom of the trough"). A third raises the question nobody asked aloud: "What happens to OpenAI's position in creative tools if they abandon video entirely? Who fills that space?"

You get the debate you'd have in a good board meeting, compressed to minutes. Without the politics. Without the loudest voice in the room winning by default.

The output isn't a single confident answer; it's a map of the real disagreements. And that map is what you actually need before making a $15-million-a-day decision.

The signal that you've done it right is when the council surfaces something you hadn't considered. In a Sora-type scenario, that might be: "The market position question is actually more important than the cost question. The cost can be managed. Ceding video to competitors is permanent." Whether you agree with that or not, you want that argument on the table before you commit.
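For the curious, the shape of a council run is easy to sketch. What follows is a minimal illustration under my own assumptions, not shingik.ai's implementation; the ModelClient interface and the function names are placeholders standing in for whatever model APIs you actually wire up:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Position:
        model: str
        recommendation: str   # e.g. "kill", "pivot", "continue"
        rationale: str

    # Placeholder client: any function that takes the decision prompt and returns
    # (recommendation, rationale). In practice this would wrap a real model API call.
    ModelClient = Callable[[str], tuple[str, str]]

    def run_council(question: str, models: dict[str, ModelClient]) -> list[Position]:
        """Ask every model the same question independently; none sees the others' answers."""
        positions = []
        for name, ask in models.items():
            recommendation, rationale = ask(question)
            positions.append(Position(name, recommendation, rationale))
        return positions

    def disagreement_map(positions: list[Position]) -> dict[str, list[str]]:
        """Group models by recommendation so the real splits are visible at a glance."""
        groups: dict[str, list[str]] = {}
        for p in positions:
            groups.setdefault(p.recommendation, []).append(p.model)
        return groups

The useful artifact is the disagreement map itself: not a verdict, but a record of where the models split and what each one weighted.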


The Heuristic for "This Needs a Council"

I've landed on a simple test. You need deliberation, not just automation, when any of these are true:

The decision is hard to reverse. Killing Sora is irreversible in practice. You can't un-kill a product — the team disperses, the brand moves on, the ecosystem adjusts. When you can't easily undo it, stress-test it first.

The cost of being confidently wrong is asymmetric. Getting a Python question wrong: annoying, five minutes to fix. Getting a strategic pivot wrong: years of misdirected energy and a dead billion-dollar deal. Let the asymmetry in consequences drive asymmetry in how carefully you decide.

Multiple legitimate values are in tension. The Sora call involved economics and strategic positioning and team morale and brand perception. These don't resolve neatly into one answer. A single model picks one weighting; a council surfaces all of them and makes the trade-offs explicit.

You're committing resources for a long time. The robotics bet will play out over years. That's worth an extra ten minutes of deliberation before you commit.
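If it helps to make the test concrete, the same heuristic fits in a few lines of Python. The field names are mine, a convenience for illustration rather than a formal framework:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        hard_to_reverse: bool      # can't be undone once the team disperses
        asymmetric_downside: bool  # being confidently wrong costs far more than deciding slowly
        values_in_tension: bool    # economics vs. positioning vs. morale vs. brand
        long_commitment: bool      # resources locked up for years, not weeks

    def needs_council(d: Decision) -> bool:
        """Deliberate, don't just automate, when any one of these is true."""
        return any([d.hard_to_reverse, d.asymmetric_downside,
                    d.values_in_tension, d.long_commitment])

    # A Sora-style kill-vs-continue call trips every condition.
    assert needs_council(Decision(True, True, True, True))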


The Pattern Repeating Itself

The Sora shutdown isn't a one-off. We're entering a phase where AI products are making calls that previously required a room full of executives — which features to kill, which bets to double down on, where to redirect compute. These calls are being made faster, with less institutional overhead, and by teams who have AI advisors available around the clock.

The temptation is to use AI the way you'd use a search engine: ask it, get an answer, move on. That's fine for low-stakes, easily reversible questions. For those, automation is the right call.

But for the other kind of question — the kill-vs-continue calls, the bet-the-year pivots, the things you can't walk back if you're wrong — you want deliberation baked in before you commit.

Not because AI models don't know things. They know a lot. But knowing things and weighing a genuine tradeoff well are different cognitive tasks. The first favors confidence. The second favors argument.

An AI council is built for the second kind. You bring the decision, the models debate it, and you get a map of the real tensions before you sign off. That's not a slower process. It's a more honest one.


Try it free — no signup. shingik.ai