Panel
The panel is the set of models that participate in a debate. The default panel is `claude-opus`, `gpt`, `gemini`, `grok`, and `nemotron`. Each alias maps to a specific model:
| Alias | Model |
|---|---|
| `claude-opus` | `anthropic/claude-opus-4.6` |
| `claude` | `anthropic/claude-sonnet-4-6` |
| `gpt` | `openai/gpt-5.4` |
| `gemini` | `google/gemini-3.1-pro-preview` |
| `grok` | `x-ai/grok-4.20-beta` |
| `nemotron` | `nvidia/nemotron-3-super-120b-a12b` |
Override the default with `--panel`. Duplicate aliases are supported for single-model multi-agent debates.
Agent Identity
When a panel contains duplicate model aliases, each agent receives a unique identity by appending a numeric suffix: `claude-1`, `claude-2`, `claude-3`. When aliases are unique, they are used as-is with no suffix.
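The suffixing rule above can be sketched as follows. This is a minimal illustration of the described behavior, not the tool's actual implementation; the function name `assign_agent_ids` is hypothetical.

```python
from collections import Counter

def assign_agent_ids(panel: list[str]) -> list[str]:
    """Append numeric suffixes only to aliases that appear more than once."""
    counts = Counter(panel)
    seen: Counter = Counter()
    ids = []
    for alias in panel:
        if counts[alias] > 1:
            seen[alias] += 1
            ids.append(f"{alias}-{seen[alias]}")  # e.g. "claude-2"
        else:
            ids.append(alias)                     # unique alias used as-is
    return ids
```

For example, a panel of three `claude` entries yields `claude-1`, `claude-2`, `claude-3`, while a mixed panel is left untouched.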
Agent identities are used for:
- Reflection pairing — each agent sees its own previous response and the others’ responses
- Display labels — CLI and web UI show the agent identity, not the raw alias
- Stats tracking — per-agent token and cost breakdown alongside per-model rollups
- Transcript logging — the `agent_id` field in `ModelResponse` persists in JSON transcripts
The `display_label` property on `ModelResponse` returns the agent identity if set, falling back to the model alias for backward compatibility with older transcripts.
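The fallback behavior can be sketched like this. Only the `agent_id` field and the `display_label` property come from the text above; the `model` attribute name and the dataclass shape are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelResponse:
    model: str                      # model alias, e.g. "claude"
    agent_id: Optional[str] = None  # e.g. "claude-2"; absent in older transcripts

    @property
    def display_label(self) -> str:
        # Prefer the unique agent identity; fall back to the alias.
        return self.agent_id or self.model
```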
Rounds
A debate runs in rounds. Round 0 is the initial round — every panel model receives the query and responds independently. Rounds 1+ are reflection rounds — each model sees its own previous response and all other models’ previous responses, then reflects and refines. The default is 1 reflection round; the maximum is 3. More rounds increase cost and latency but can surface deeper disagreements.
Synthesis
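The round structure described above can be sketched as a loop. `ask()` and `reflect()` are hypothetical stand-ins for model calls, not the tool's real API; only the round semantics (independent round 0, reflection rounds 1+, limit of 3) come from the text.

```python
def ask(agent: str, query: str) -> str:
    return f"[{agent} answers: {query}]"          # stand-in for a model call

def reflect(agent: str, own: str, others: dict) -> str:
    return f"[{agent} refines after seeing {len(others)} peers]"

def run_debate(panel: list[str], query: str, reflection_rounds: int = 1) -> dict:
    if not 0 <= reflection_rounds <= 3:           # default is 1, maximum is 3
        raise ValueError("reflection_rounds must be between 0 and 3")
    # Round 0: every panel model answers the query independently.
    responses = {agent: ask(agent, query) for agent in panel}
    # Rounds 1+: each model sees its own and the others' previous answers.
    for _ in range(reflection_rounds):
        responses = {
            agent: reflect(agent, responses[agent],
                           {a: r for a, r in responses.items() if a != agent})
            for agent in panel
        }
    return responses
```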
After all reflection rounds complete, the synthesizer model receives the full debate transcript and distills it into a final answer. The synthesizer defaults to `claude-opus` but can be any model alias. The synthesizer also participates in the debate panel unless explicitly excluded.
Transcripts
Every debate produces a `DebateTranscript` — a complete structured record saved as JSON to `~/.mutual-dissent/transcripts/`. Filename format: `YYYY-MM-DD_shortid.json`.
Each transcript includes:
- The original query
- Panel composition and synthesizer
- All rounds with per-model responses, token counts, latency, and routing decisions
- The synthesized final answer
- Aggregate stats: total tokens, per-model cost breakdown, total cost in USD
Roles
Each `ModelResponse` carries a `role` field:
| Role | Round number | Description |
|---|---|---|
| `initial` | 0 | First response to the query |
| `reflection` | 1+ | Response after seeing others’ answers |
| `synthesis` | -1 | Final synthesized answer |
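The table above implies a direct mapping from round number to role, which can be sketched as a small helper (the function name `role_for_round` is hypothetical; only the mapping itself comes from the table):

```python
def role_for_round(round_number: int) -> str:
    """Map a round number to its ModelResponse role per the table above."""
    if round_number == -1:
        return "synthesis"    # final synthesized answer
    if round_number == 0:
        return "initial"      # first response to the query
    return "reflection"       # rounds 1+ after seeing others' answers
```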
Ground Truth Scoring
Pass `--ground-truth` to score the synthesis against a known-correct reference answer. The synthesizer model acts as judge and produces a score attached to the transcript’s metadata. This adds one API call.
Panelist Context
The Python API supports per-panelist context injection via `panelist_context` — a dict mapping a model alias to a context string prepended to that model’s prompts in every round. It is used for RAG augmentation and cross-tool research experiments (CounterSignal payload injection, CounterAgent findings).
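The prepending behavior can be illustrated as follows. Only the `panelist_context` dict shape (alias to context string) comes from the text; the helper name `build_prompt` and the joining convention are assumptions for illustration.

```python
def build_prompt(alias: str, base_prompt: str, panelist_context: dict[str, str]) -> str:
    """Prepend the alias's context string, if any, to the prompt for a round."""
    ctx = panelist_context.get(alias)
    return f"{ctx}\n\n{base_prompt}" if ctx else base_prompt

# Example: give only one panelist retrieved documents (RAG augmentation).
panelist_context = {
    "gemini": "Retrieved docs: ...",
}
```

Panelists without an entry in the dict receive their prompts unchanged.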