Two AI-agent conferences in San Francisco wrap today. AI Council 2026 closes its 10-track program at the Marriott Marquis at 2:00 PM PT. Interrupt 2026 — LangChain's enterprise-agent conference — wraps at The Midway at 4:00 PM PT, after Harrison Chase's second-day keynote, an Andrew Ng fireside, and two days of production case studies from Apple, Lyft, LinkedIn, Toyota, Coinbase, Clay, Rippling, Workday, and Honeywell. About 3,000 practitioners fly out tomorrow.

So I want to close out the three-day arc cleanly. Day 1 was the conference open and the OpenAI Deployment Company launching at $4B in committed funding. Day 2 was the two-conference overlap day, Capgemini's first named follow-on putting the entity-level valuation at $14B, and Cristiano De Nobili's statistical-physics critique of naive multi-agent consensus. Day 3 — today — is the wrap, and three things changed in the last twenty-four hours that sharpen the read.

Take them in order. Two are news, one is a paper, and the paper is the one to read.

The orchestration tier just acquired annuity-style economics

Yesterday's framing of the OpenAI Deployment Company was straightforward: $4B committed funding, $14B entity-level valuation, that's a 3.5x multiplier and it prices the alliance richly. Today's framing is sharper.

SiliconANGLE, Axios, and SQ Magazine all confirmed the structural terms: external backers get a 17.5% guaranteed minimum return, with a profit cap on the upside. That's not venture-style. That's annuity-style: bond-like risk with a capped upside, and OpenAI carrying the downside. The alliance is being priced as utility-grade enterprise-AI infrastructure, not as a growth bet on uncertain consulting margins.

And the consolidation is moving fast. Bain & Company — the consulting firm, distinct from co-lead Bain Capital — published its own investor press release yesterday, becoming the second named follow-on after Capgemini's May 12 release. Two named follow-ons in 48 hours. The Frontier Alliance structure (Accenture, BCG, McKinsey, Capgemini per OpenAI's February announcement) sits alongside DeployCo as the institutional consolidation vehicle.

OpenAI's CRO Denise Dresser gave the cleanest pull quote of the launch coverage: "AI is becoming capable of doing increasingly meaningful work inside organizations. The challenge now is helping companies integrate these systems into the infrastructure and workflows that power their businesses." That's the orchestration-tier admission, said plainly by the seller.

Inside the alliance: McKinsey, Bain & Company, Capgemini, Goldman, BCG, Accenture. Outside: TCS, Infosys, HCL, Wipro, Persistent. The Nifty IT index dropped 3.6% on Tuesday. Routing, integration, and human-deployed orchestration are being institutionally rolled up into a single $14B SKU.

What's not being rolled up: a primitive that extracts a decision from multiple models reasoning about the same question. That's deliberation, and it sits on a different layer.

The math paper that explains why six strategies, not one

While the consolidation news was propagating, a paper landed on Monday that almost no practitioner has noticed yet, and it deserves more attention than it's gotten.

arXiv:2605.11453, Ethan David James Parks and Dalal Alharthi, University of Arizona, submitted May 12. Title: Predictive Maps of Multi-Agent Reasoning: A Successor-Representation Spectrum for LLM Communication Topologies.

Here's the setup. Practitioners deploying multi-agent LLM systems currently pick among chain, star, mesh, and richer communication topologies with no pre-inference signal for which one will fail on which question. Parks & Alharthi propose modeling the communication graph as a row-stochastic operator P, then computing the successor representation M = (I − γP)⁻¹, where γ is a discount factor in [0, 1) — a classical reinforcement-learning object that captures expected discounted future visit counts through the topology.

Read off three spectral quantities. The spectral radius ρ(M) controls drift amplification — whether reasoning errors compound through the topology. The spectral gap Δ(M) controls consensus convergence — how fast the system agrees and whether that agreement is stable. The condition number κ(M) controls brittleness — how robust the topology is to a single noisy agent.
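The diagnostic is cheap to compute. Here's a minimal sketch using only NumPy; note that the gap is computed here as the difference between the two largest eigenvalue magnitudes of M, which may differ from the paper's exact definition:

```python
import numpy as np

def successor_representation(P: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    """M = (I - gamma * P)^-1: expected discounted visit counts through the topology."""
    return np.linalg.inv(np.eye(P.shape[0]) - gamma * P)

def spectral_profile(M: np.ndarray) -> dict:
    """The three diagnostics read off M."""
    eig = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
    return {
        "radius": eig[0],                # rho(M): drift amplification
        "gap": eig[0] - eig[1],          # Delta(M): consensus convergence
        "condition": np.linalg.cond(M),  # kappa(M): brittleness to a noisy agent
    }

# 4-agent mesh: everyone talks to everyone, rows normalized to sum to 1.
n = 4
P_mesh = (np.ones((n, n)) - np.eye(n)) / (n - 1)
print(spectral_profile(successor_representation(P_mesh)))
```

At γ = 0.9 the 4-agent mesh gives a radius of 10 and a condition number of 13. The point is that all three numbers come out of P alone, before a single token is spent on inference.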

The paper derives closed-form spectra for the three canonical topologies (chain, star, mesh) under row-stochastic normalization, and validates the predictions empirically on a 12-step state-tracking task with Qwen2.5-7B-Instruct across 100 trials per topology.
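The three canonical topologies are easy to reproduce as row-stochastic operators. The construction below is an illustrative sketch, not the paper's code. One caveat worth flagging: under strict row normalization every P has Perron eigenvalue 1, so ρ(M) is pinned at 1/(1 − γ) in this toy, and the topologies separate on the gap and condition number:

```python
import numpy as np

def row_stochastic(A: np.ndarray) -> np.ndarray:
    """Normalize adjacency rows to sum to 1."""
    return A / A.sum(axis=1, keepdims=True)

def chain(n: int) -> np.ndarray:
    """Agent i communicates only with its immediate neighbors."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return row_stochastic(A)

def star(n: int) -> np.ndarray:
    """Hub agent 0 communicates with every spoke."""
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1.0
    return row_stochastic(A)

def mesh(n: int) -> np.ndarray:
    """Everyone communicates with everyone."""
    return row_stochastic(np.ones((n, n)) - np.eye(n))

gamma, n = 0.9, 4
for name, P in [("chain", chain(n)), ("star", star(n)), ("mesh", mesh(n))]:
    M = np.linalg.inv(np.eye(n) - gamma * P)
    eig = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
    print(f"{name:5s}  rho={eig[0]:6.2f}  gap={eig[0] - eig[1]:6.2f}  "
          f"kappa={np.linalg.cond(M):6.2f}")
```

Running this shows the same matrix inverse producing different gap and brittleness numbers for each wiring of the same four agents, which is the paper's core move.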

This is the first paper in 2026 to give multi-agent practitioners a pre-inference answer to "which topology should the council use for this question." It's also the cleanest mathematical justification we've seen for why a product like Shingikai has six strategies and not one.

Different questions have different spectral profiles. Different topologies amplify different failure modes. The naive assumption — more communication equals better deliberation — is wrong: more communication can amplify drift through the spectral radius even when it shortens the consensus path through the spectral gap. The right amount and shape of communication is task-dependent, and the spectral diagnostic tells you which it is before you run inference.
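A pre-inference selector built on that diagnostic might look like the sketch below. The cost function, the weights, and the `pick_topology` name are all hypothetical, not from the paper; the point is only that the choice can be scored from P before any inference runs:

```python
import numpy as np

def spectral_cost(P: np.ndarray, weights: dict, gamma: float = 0.9) -> float:
    """Weighted cost over the three failure modes: drift (radius),
    slow consensus (inverse gap), brittleness (condition number)."""
    M = np.linalg.inv(np.eye(P.shape[0]) - gamma * P)
    eig = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
    return (weights["drift"] * eig[0]
            + weights["consensus"] / max(eig[0] - eig[1], 1e-9)
            + weights["brittleness"] * np.linalg.cond(M))

def pick_topology(candidates: dict, weights: dict) -> str:
    """Return the candidate topology with the lowest weighted spectral cost."""
    return min(candidates, key=lambda name: spectral_cost(candidates[name], weights))

# Two candidate wirings for a 4-agent council.
n = 4
candidates = {
    "star": np.zeros((n, n)),
    "mesh": (np.ones((n, n)) - np.eye(n)) / (n - 1),
}
candidates["star"][0, 1:] = 1.0 / (n - 1)
candidates["star"][1:, 0] = 1.0

# A brittleness-sensitive task: weight kappa(M) and ignore the rest.
print(pick_topology(candidates, {"drift": 0.0, "consensus": 0.0, "brittleness": 1.0}))
```

For this brittleness-only weighting the mesh wins, because its condition number at γ = 0.9 is lower than the star's; a hub-and-spoke wiring concentrates the effect of one noisy agent.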

Mapping the spectral failure modes onto strategy selection

Map the three failure modes onto a council strategy menu and the picture gets clean.

Drift amplification (spectral radius) is what the Survivor strategy is designed against. Survivor eliminates the weakest reasoning early, before divergent threads compound through later rounds. In spectral terms, Survivor cuts the operator before its radius can do damage.

Consensus convergence pathology (spectral gap) is what Round Robin and Traditional Council make explicit. Fixed-round iteration is a controlled-convergence-rate topology — you decide in advance how many rounds the gap gets to close. Chairman synthesis at the end audits whether the convergence was productive or just sycophantic.

Brittleness under perturbation (condition number) is what Red Team vs. Blue Team specifically guards against. Adversarial role differentiation across heterogeneous frontier models makes the system more robust to a single noisy agent, not less — the dissenter is structurally protected, and a single bad reasoning pass can't dominate.
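The three pairings above can be written down as a small lookup table. The strategy names are this article's, and the tie from each diagnostic to each strategy is the article's claim, not the paper's:

```python
# Illustrative mapping from spectral failure mode to council strategy,
# summarizing the argument above (article's claim, not the paper's).
FAILURE_TO_STRATEGY = {
    "drift_amplification": {
        "diagnostic": "spectral radius rho(M)",
        "strategy": "Survivor",
    },
    "consensus_pathology": {
        "diagnostic": "spectral gap Delta(M)",
        "strategy": "Round Robin / Traditional Council",
    },
    "brittleness": {
        "diagnostic": "condition number kappa(M)",
        "strategy": "Red Team vs. Blue Team",
    },
}
```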

Worth being specific about the regime. Parks & Alharthi measured a single small open-weight model. A heterogeneous frontier deployment — Claude / GPT / Gemini / Grok across four distinct training lineages — is a higher-order regime. The spectral diagnostic still applies; the failure-mode amplitudes are smaller because the prior diversity is larger. Heterogeneity reduces effective spectral radius; structured disagreement widens the effective spectral gap.

Pair this with De Nobili's May 11 statistical-physics critique (arXiv:2605.10528). De Nobili names the failure mode for same-model-class consensus under minimal prompting: amplified single-agent opinion. Parks & Alharthi give the spectral math for which topology amplifies that failure. Together they're a complete diagnostic — name the failure, then predict the architecture that produces it.

Karpathy gave the verifiability-bottleneck framing at Sequoia Ascent 2026 last week: "Even for writing, you can imagine having a council of LLM judges and getting something reasonable." That's the motivation. The De Nobili / Parks & Alharthi pair is the architecture.

Three signals, one read

Stack the three days. The orchestration tier consolidated this week with bond-like economics — two named follow-on consulting partners in 48 hours, 17.5% guaranteed return, profit cap. The practitioner tier convened in San Francisco for three days to wrestle with the operating-layer-over-the-model question. The academic literature shipped the first pre-deployment diagnostic for which council topology will fail on which question.

Three different layers, one architectural conclusion: the council pattern isn't a feature. It's a configurable communication topology with a measurable spectral profile, and strategy selection is the practitioner's predictive map for which topology fails first on which question.

Pick the right topology for the question, not the same topology for every question. That's a math problem with a measurable answer now.

Try a council that's structurally heterogeneous on a real decision and see whether the topology you'd intuit is the one the math predicts.

Try it free. shingik.ai — no signup.