
Why Your Claude Code Agents Agree With Each Other

Robin · 5 min read

If you run multiple Claude Code agents on the same problem - engineer, security reviewer, architect, researcher - you probably expect four different perspectives.

You are getting one perspective repeated four times with different formatting.

A NeurIPS 2024 study showed it: same-model agents prompted to disagree converge anyway. The shared training distribution is stronger than your system prompt. More agents does not mean more perspectives. It means more confidence in the same blind spot.

TL;DR

Role prompts create the illusion of diverse analysis. Structural divergence - different output formats, required contradictions, distinct cognitive modes - creates actual diversity. Quantum Lens is an open source Claude Code scenario that enforces this architecturally.

Want the foundational patterns first? The free 3-pattern guide covers memory, delegation, and knowledge graphs at concept level.

Why Do Claude Code Agents Converge on the Same Blind Spots?

Because role labels change what an agent talks about, not how it thinks. The underlying cognitive pattern - how it processes information, what it notices, what it ignores - stays identical across all your "different" agents.

Run a code architecture review with four agents. Compare outputs. 70-80% reasoning overlap. The security agent flags the same three risks the engineer already found. The architect agrees with the researcher almost word-for-word.

Personas are costumes. Not cognitive modes.
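The overlap figure is measurable. Here is a naive sketch that compares outputs as sets of normalized sentences; `claim_set` and `overlap` are hypothetical helpers (a real measurement would use embedding similarity or claim extraction, not sentence splitting):

```python
def claim_set(output: str) -> set[str]:
    """Split an agent's output into normalized claim strings (naively, one per sentence)."""
    return {s.strip().lower() for s in output.split(".") if s.strip()}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity between two agents' claim sets."""
    ca, cb = claim_set(a), claim_set(b)
    return len(ca & cb) / len(ca | cb) if ca | cb else 0.0

engineer = "Input validation is missing. Error handling is weak. Caching would help."
security = "Input validation is missing. Error handling is weak. Secrets are logged."
print(f"{overlap(engineer, security):.0%} shared claims")  # prints "50% shared claims"
```

Run this across four role-prompted agents and the pairwise numbers cluster high; that is the convergence the study describes.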

Standard Multi-Agent Analysis
  • 5 agents with different role prompts
  • 70-80% reasoning overlap in outputs
  • Convergence on shared blind spots
  • More agents = more confidence in same bias
Structural Divergence (Quantum Lens)
  • 7 lenses with distinct cognitive modes
  • Required output format differences per lens
  • Each lens MUST contradict naive reading
  • Agreement triggers suspicion, not confidence

How Does Structural Divergence Fix Multi-Agent Analysis?

By making convergence architecturally difficult instead of relying on prompts to create disagreement. Each of the 7 cognitive lenses has hard constraints that force genuinely different processing - not just different topics.

The Void Reader only looks at what is missing. It cannot comment on what is present. The Paradox Hunter only finds contradictions that function as features - not bugs. The Failure Romantic extracts signal from dead ends and limitations. The Temporal Archaeologist detects frozen processes - things that were becoming something else when they got stuck.

Each lens produces a unique output section no other lens can use, with voice markers that force different language patterns and a mandatory requirement to contradict at least two claims from the naive reading.

When two lenses agree, the system flags it as a potential shared misconception. Agreement is suspicious. Disagreement is signal.
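The agreement flag can be sketched as a simple cross-lens tally. The lens names come from the article; the `flag_agreements` helper and the sample claims are hypothetical:

```python
from collections import defaultdict

def flag_agreements(lens_claims: dict[str, set[str]]) -> dict[str, list[str]]:
    """Map each claim to the lenses that produced it. A claim backed by
    two or more lenses is flagged for scrutiny, not trusted more."""
    backers: dict[str, list[str]] = defaultdict(list)
    for lens, claims in lens_claims.items():
        for claim in claims:
            backers[claim].append(lens)
    return {c: ls for c, ls in backers.items() if len(ls) >= 2}

suspicious = flag_agreements({
    "void_reader":      {"no rollback plan"},
    "paradox_hunter":   {"no rollback plan", "scaling claim contradicts budget"},
    "failure_romantic": {"prior migration failed silently"},
})
# "no rollback plan" is flagged: two lenses agreeing may share a misconception
```

Inverting the usual voting logic is the point: majority agreement is evidence of a shared prior, not of correctness.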

Want the full system blueprint? Get the free 3-pattern guide.

Quantum Lens Architecture
  • Barrier Taxonomy: classify every limitation as assumed, mutable, temporal, or immutable
  • Solution Engine: 4 modes - repo eval, problem-solve, contra-mode, cascade
  • Anti-Convergence Layer: required contradictions, unique output formats, voice isolation
  • Perception Engine: 7 cognitive lenses in parallel - decompose, diverge, interfere, converge

What Does a Structurally Divergent Analysis Actually Surface?

Genuinely different angles on the same input - not variations on the same conclusion. Here is a concrete example.

Take the claim "AI agents will replace most knowledge workers by 2030." Standard multi-agent review: four agents broadly agree on a timeline, debate the percentage, converge on "yes, partially, with caveats." Barely useful.

Quantum Lens output is different. The Void Reader identifies that every prediction assumes current organizational structures remain static - nobody is modeling how organizations restructure around AI. The Paradox Hunter finds that the most "replaced" knowledge workers are simultaneously becoming more valuable as AI trainers - a contradiction that is actually a feature of the transition. The Failure Romantic extracts signal from previous automation waves that failed (1990s expert systems, 2010s RPA) and what those failures teach about current predictions.

The Interference Reader - a meta-lens reading only what other lenses produced - finds a pattern: every lens independently surfaced the same structural gap. The predictions model AI capability growth but none model human adaptation speed. That becomes the Killer Question: "What if the bottleneck is not AI capability but organizational adaptation velocity?"

One analysis. Seven genuinely different angles. A question that reframes the entire problem.

Perception Pipeline
Input decomposed into 5-12 atomic claims
  ↓
7 lenses analyze atoms in parallel (Wave 1 + 2)
  ↓
Interference Reader detects meta-patterns across lenses
  ↓
Killer Question formulated from convergence gaps
  ↓
Breakthrough Score assigned (1-10 with confidence bands)
  ↓
Score 7+ auto-saved for cross-analysis tracking
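The pipeline above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in for the actual Claude Code agent prompts: `run_lens` and `interfere` fake the lens work, and the three unnamed lens slots are placeholders (the article names four lenses plus the Interference Reader):

```python
from concurrent.futures import ThreadPoolExecutor

LENSES = ["void_reader", "paradox_hunter", "failure_romantic",
          "temporal_archaeologist", "lens_a", "lens_b", "lens_c"]

def decompose(text: str) -> list[str]:
    """Split input into atomic claims (naively: one per sentence)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def run_lens(lens: str, atoms: list[str]) -> dict:
    """Stand-in for one lens analyzing every atom."""
    return {"lens": lens, "findings": [f"[{lens}] {a}" for a in atoms]}

def interfere(waves: list[dict]) -> str:
    """Interference Reader stub: reads only lens outputs, never the input."""
    return f"meta-pattern across {len(waves)} lenses"

def analyze(text: str) -> dict:
    atoms = decompose(text)
    with ThreadPoolExecutor() as pool:  # lenses run in parallel
        waves = list(pool.map(lambda l: run_lens(l, atoms), LENSES))
    return {"atoms": atoms, "waves": waves, "meta": interfere(waves)}
```

The structural point survives even in the stub: the Interference Reader takes only `waves` as input, so it can surface patterns across lenses without re-reading the original text.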

What Research Backs This Approach?

10 peer-reviewed sources, not vibes. The anti-convergence architecture comes from NeurIPS 2024 findings on multi-LLM debate. The lens design draws on the PRISM Framework (Diamond 2025) - 7 neuroscience-grounded worldviews. Decomposition uses Atoms of Thought (Teng 2025) for Markov-property claim splitting.

The cognitive modes are modeled on neurodiverse processing styles - autistic systemizing, ADHD divergent linking, dyslexic spatial reasoning. Not as labels. As structurally distinct information processing patterns.

Every barrier identified gets classified: assumed (default - prove it is real), mutable (engineering can change it), temporal (known timeline), or immutable (hard proof required). Burden of proof is on immutability, not possibility.
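The classification rule can be sketched as a decision function. The `classify` helper and its boolean inputs are hypothetical, but the ordering follows the taxonomy above: a barrier stays assumed by default, and immutability requires hard proof:

```python
from enum import Enum

class Barrier(Enum):
    ASSUMED = "assumed"      # default: first prove the limitation is real
    MUTABLE = "mutable"      # engineering can change it
    TEMPORAL = "temporal"    # known timeline until it lifts
    IMMUTABLE = "immutable"  # hard proof required

def classify(proven_real: bool, engineerable: bool,
             has_timeline: bool, hard_proof_immutable: bool) -> Barrier:
    """Burden of proof is on immutability, not possibility: a barrier
    stays ASSUMED until evidence moves it into a stronger class."""
    if not proven_real:
        return Barrier.ASSUMED
    if hard_proof_immutable:
        return Barrier.IMMUTABLE
    if engineerable:
        return Barrier.MUTABLE
    if has_timeline:
        return Barrier.TEMPORAL
    return Barrier.ASSUMED
```

Note the order of the checks: a barrier that was never proven real short-circuits to ASSUMED no matter what else is claimed about it.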

How Do You Get Started With Zero Lock-In?

Clone the repo, copy one folder, run a command. Quantum Lens is open source (MIT) with three independent tiers:

  1. Core: Claude Code only - full perception engine, general solution mode
  2. Web-Enhanced: Add Firecrawl MCP for URL scraping and repo analysis
  3. Full: Add Kairn MCP for persistent storage and cross-analysis tracking

Nothing breaks without the optional components. Run /quantum-lens on anything you want to challenge.

This pairs naturally with agent delegation for routing analysis tasks, with trait composition for generating specialized agent profiles, and with the planning framework that structures workflows around these tools.

The complete scenario - 12 agents, 5 commands, 7 cognitive lenses, scoring rubrics, and output templates - is on GitHub.

FAQ

How is Quantum Lens different from just running multiple Claude agents?
Standard multi-agent setups give agents different role labels but the same cognitive pattern. Quantum Lens enforces structural divergence through different output formats, required contradictions, and unique cognitive modes per lens.
Do I need any special setup beyond Claude Code?
No. The core tier works with Claude Code alone. Optional tiers add Firecrawl for web scraping and Kairn for persistent storage, but neither is required. Clone the repo, copy the scenario folder, done.
What kind of input works best with Quantum Lens?
Anything with embedded assumptions - strategy decisions, architecture proposals, industry predictions, framework evaluations. The lenses surface what is missing and assumed, not what is stated.
Can I customize which lenses run or add my own?
Yes. The lens-calibrate command enables or disables lenses and adjusts depth defaults. An Extension Protocol lets you create custom lenses with their own cognitive modes, output sections, and voice constraints.
Does this work with models other than Claude?
The scenario is built for Claude Code's agent system. The architecture patterns - structural divergence, required contradictions, voice isolation - are transferable to any multi-agent setup.
