Contraria

Ensemble learning, but for generative AI.

Contraria is an experiment: instead of trusting one model, it orchestrates multiple advisors, forces them to critique and revise each other, then synthesizes a final answer. The result is a practical test of whether model ensembles improve real-world chatbot quality.

Why it is interesting

Most chat apps optimize for speed from a single model. Contraria optimizes for decision quality by harvesting disagreement, surfacing tradeoffs, and combining the strongest ideas from different model priors.

What it tests

Can a coordinated group of models beat a single model on human preference? Contraria collects blinded pairwise feedback, so the experiment is measurable rather than merely anecdotal.
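Blinded pairwise collection can be sketched in a few lines. This is an illustrative Python sketch, not Contraria's actual code: the helper names (`blind_pair`, `record_vote`) and the "ensemble" / "single" labels are assumptions. The key idea is that the rater only ever sees anonymous "A"/"B" labels, and the hidden key maps a vote back to the system that produced the answer.

```python
import random

def blind_pair(ensemble_answer, single_answer, rng=random):
    """Shuffle the two answers behind neutral labels.

    Returns (shown, key): `shown` maps "A"/"B" to answer text for the
    rater; `key` maps "A"/"B" back to the source system for unblinding.
    """
    pair = [("ensemble", ensemble_answer), ("single", single_answer)]
    rng.shuffle(pair)  # randomize presentation order per comparison
    shown = {"A": pair[0][1], "B": pair[1][1]}
    key = {"A": pair[0][0], "B": pair[1][0]}
    return shown, key

def record_vote(key, choice, tallies):
    """Unblind the rater's choice ("A" or "B") and tally it per system."""
    system = key[choice]
    tallies[system] = tallies.get(system, 0) + 1
    return tallies
```

Because order is re-randomized for every comparison, position bias averages out, and the tallies give a direct preference rate for the ensemble versus the single model.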

How it feels

You see each advisor think, refine, and react to peers in near real time. It is not a black box. You can inspect exactly how the answer evolved from first draft to final synthesis.

How Contraria works

1. Generate

Each advisor independently answers your prompt.

2. Self-review

Each advisor critiques and revises its own answer for multiple rounds.

3. Peer review

Advisors critique each other and revise again using cross-model feedback.

4. Synthesize

The Overseer composes one final response from the strongest advisor content.
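The four stages above can be sketched as a single loop. This is a minimal illustrative sketch, assuming a hypothetical `ask(model, prompt)` helper standing in for a real chat-completion call; the names (`ask`, `ADVISORS`, `OVERSEER`, `ROUNDS`) are placeholders, not Contraria's API.

```python
def ask(model, prompt):
    """Placeholder for a real model call; returns a tagged string here."""
    return f"[{model}] answer to: {prompt[:40]}"

ADVISORS = ["advisor-a", "advisor-b", "advisor-c"]  # hypothetical model names
OVERSEER = "overseer"
ROUNDS = 2  # self-review iterations

def run(prompt):
    # 1. Generate: each advisor answers the prompt independently.
    drafts = {m: ask(m, prompt) for m in ADVISORS}

    # 2. Self-review: each advisor critiques and revises its own answer.
    for _ in range(ROUNDS):
        drafts = {
            m: ask(m, f"Critique and revise your answer:\n{a}")
            for m, a in drafts.items()
        }

    # 3. Peer review: each advisor revises using the others' answers.
    drafts = {
        m: ask(m, "Peers said:\n"
               + "\n".join(a for o, a in drafts.items() if o != m)
               + f"\nRevise your answer:\n{drafts[m]}")
        for m in ADVISORS
    }

    # 4. Synthesize: the Overseer composes one final response.
    combined = "\n\n".join(drafts.values())
    return ask(OVERSEER, f"Synthesize the strongest final answer:\n{combined}")
```

Each stage only needs the previous stage's drafts, so the pipeline stays a simple dict-in, dict-out transformation per step.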

Many minds, one smarter answer.

Try the experiment yourself and compare how ensembles change output quality.