Today we're launching Chairman Synthesis, a new council strategy in Shingikai. Shipping it is a good moment to write about the research that inspired it, and the broader idea behind how we've been building the product.

The naive version of "use multiple AIs" is just asking the same question three times and averaging. That doesn't work. What you get is confident noise — models that agree on the wrong thing, or cancel each other out without resolution. More voices without structure doesn't reduce the problem. It scales it.

So what does work?

One of the clearer pieces of research on this comes from Andy Hall's work on LLM governance. He studied what happens when you arrange AI models into a specific deliberation pattern: models critique and endorse each other's answers, and a synthesizer — a Chairman — integrates the full record into a final answer. Hall found that this architecture outperforms simple majority voting on the tasks he tested. The key finding isn't that more models help. It's that the structure of how they engage matters.
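The pattern can be sketched as a three-round loop. This is a minimal sketch under assumptions: `ask(model, prompt)` is a hypothetical helper that returns a model's text response, and none of these names come from Hall's work or from Shingikai's implementation.

```python
def deliberate(models, chairman, question, ask):
    """Chairman-style deliberation: answer, critique/endorse, synthesize."""
    # Round 1: each model answers the question independently.
    answers = {m: ask(m, question) for m in models}

    # Round 2: each model reads the others' answers, critiques them,
    # and endorses the one it would actually stand behind.
    reviews = {}
    for m in models:
        others = "\n\n".join(
            f"[{o}]\n{a}" for o, a in answers.items() if o != m
        )
        reviews[m] = ask(
            m,
            f"Question: {question}\n\n{others}\n\n"
            "Critique each answer's strengths and weaknesses, "
            "then name the one answer you endorse.",
        )

    # Round 3: the Chairman integrates the full record -- the raw
    # answers plus the critique/endorsement record -- into a final answer.
    record = "\n\n".join(
        [f"[answer: {m}]\n{a}" for m, a in answers.items()]
        + [f"[review: {m}]\n{r}" for m, r in reviews.items()]
    )
    return ask(
        chairman,
        f"Question: {question}\n\n{record}\n\n"
        "Integrate this deliberation record into a single best answer.",
    )
```

Note that the Chairman's prompt carries everything, not just the answers: the critiques are part of the input to synthesis, which is the point of the architecture.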

This runs counter to how most people think about the problem.

The Voting Instinct

When people first encounter the idea of an AI council, the mental model is usually democratic. Ask five models. See what three of them agree on. Go with the majority. It feels rigorous.

The problem is that majority voting assumes the value is in the count of opinions. But AI models aren't independent voters. They share training data, they share blind spots, and they share failure modes. Three models all confidently wrong doesn't become right because of consensus.
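Part of voting's appeal is how little code it takes. A sketch of the baseline, assuming the same kind of hypothetical `ask(model, question)` helper:

```python
from collections import Counter

def majority_vote(models, question, ask):
    # Each model answers in isolation; nobody reads anyone else's work.
    answers = [ask(m, question) for m in models]
    # The most common answer wins -- even if it's a shared blind spot.
    return Counter(answers).most_common(1)[0][0]
```

If two of three models share the same confident mistake, the vote amplifies the mistake rather than correcting it; nothing in the structure ever forces the error to be examined.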

What the Chairman architecture does differently is treat the endorsement tally as a signal, not a verdict. The real work happens before the tally. It happens when models are forced to read each other's answers and explain, in specific terms, what they think is weak, what they think is strong, and which answer they'd actually stand behind. That structured critique is where disagreements surface. It's where overconfidence gets called out. It's where the council's actual intelligence shows up.

The synthesizer sees all of it — not just the raw answers but the full record of who critiqued whom and why. That context is what enables a good synthesis. The Chairman isn't a tiebreaker. It's an integrator.

What We Took From It

What we took from Hall's research wasn't that Chairman Synthesis is the one right way to build a council. It's something more general: structure is the thing. How you arrange the deliberation shapes what comes out of it. That's the idea we've been building Shingikai around.

A single AI model is built to produce a fluent, coherent answer. It's good at that. But fluency can mask the thing you actually want to know: where is this weakest? A well-structured council creates conditions where that weakness has to surface. When models have to read each other's work, when they have to commit to a preferred answer, when they have to defend a position — those are different cognitive demands than just producing an answer in isolation. The structure generates information that no individual model produces on its own.

This isn't magic. It's the same reason good human organizations don't just poll their smartest person and move on. How you engage with a problem shapes the quality of what comes out.

Different Questions, Different Structures

Once you accept that the architecture is doing the work, the next question follows: what architecture for what problem? That's a design question, and we don't think there's one answer.

An adversarial question — where you want someone to stress-test a position and attack its weakest assumptions — benefits from an adversarial structure. Red Team vs. Blue Team. Formal opposition, formal defense. That structure does something different than synthesis.

An open-ended creative question might call for sequential refinement, where each model builds on the last rather than critiquing it. A prioritization question might benefit from elimination, where weak options get cut until the council converges. A time-sensitive question might just need the best single answer, fast, not a full deliberation.
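Sequential refinement, for instance, is a genuinely different loop from critique-and-synthesize. A sketch, again with a hypothetical `ask(model, prompt)` helper:

```python
def sequential_refinement(models, question, ask):
    # The first model drafts; each later model builds on the draft
    # rather than critiquing it or voting on it.
    draft = ask(models[0], question)
    for m in models[1:]:
        draft = ask(
            m,
            f"Question: {question}\n\nCurrent draft:\n{draft}\n\n"
            "Improve this draft. Keep what works; rework what doesn't.",
        )
    return draft
```

No endorsements, no tally, no synthesizer: the information flows forward through one evolving draft instead of converging on a record.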

Chairman Synthesis is a strong fit for questions where integration is the hard part — where you have multiple defensible perspectives and the value is in pulling them together well. That's exactly the kind of problem where Hall's findings are most compelling. But the deeper point is that this is a family of architectures, and picking the right one is part of the problem.

The Architecture Is the Product

This is why we've built seven council strategies rather than one. Traditional Council, Round Robin, Survivor, Collaborative Editing, Red Team vs. Blue Team, Quick Take, and Chairman Synthesis — each is a different deliberation architecture, matched to a different kind of question. Chairman Synthesis is the newest, inspired by the pattern Hall's research points at: that structured critique and synthesis can outperform simpler aggregation. The others exist because other problems have different shapes, and we've learned from building and using them that no single structure is the right answer everywhere.
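The family-of-architectures idea reduces to a simple shape in code: every strategy is a function with the same signature, and choosing one is a dispatch decision. A sketch — the strategy name below mirrors the product's list, but the registry, the decorator, and the `ask` helper are all hypothetical:

```python
from typing import Callable

# Registry mapping strategy names to deliberation functions.
STRATEGIES: dict[str, Callable] = {}

def strategy(name):
    """Register a deliberation architecture under a name."""
    def register(fn):
        STRATEGIES[name] = fn
        return fn
    return register

@strategy("quick_take")
def quick_take(models, question, ask):
    # Time-sensitive: the best single answer, fast, with no deliberation.
    return ask(models[0], question)

def run_council(name, models, question, ask):
    # Picking the right architecture is part of the problem.
    return STRATEGIES[name](models, question, ask)
```

Because every strategy shares a signature, swapping architectures is a one-line change at the call site — which is what makes matching the structure to the question practical.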

What they share is the underlying premise: putting structure around how AI models engage with a problem produces better outputs than aggregating independent attempts. The intelligence isn't only in the models. It's in the architecture.

Chairman Synthesis is live in Shingikai today, alongside the six other strategies. If your question is one where multiple defensible perspectives need to be integrated — not just aggregated — it's a good place to start.

A single AI is confident. A well-structured council is calibrated. That difference matters most on the questions where being wrong is expensive.