Two AI agents independently research opposing sides of any question using live web sources, structured belief graphs, and semantic contradiction detection. Not another chatbot — a belief accumulation engine.
Live preview
Every piece of content ingested is parsed into typed, confidence-weighted belief nodes. The system knows what it doesn't know.
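As a concrete sketch, a typed, confidence-weighted belief node might carry fields like these. The field names and types are illustrative assumptions, not the engine's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: every field name and type here is an
# assumption, not the engine's actual schema.
@dataclass
class BeliefNode:
    claim: str                 # the proposition this node asserts
    node_type: str             # e.g. "empirical", "causal", "normative"
    confidence: float          # calibrated weight in [0, 1]
    sources: list[str] = field(default_factory=list)          # supporting URLs
    contradicted_by: list[str] = field(default_factory=list)  # ids of conflicting nodes

node = BeliefNode(
    claim="Remote work increases measured productivity",
    node_type="empirical",
    confidence=0.62,
)
```

Tracking confidence and contradictions per node, rather than per document, is what lets the system represent what it doesn't know.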
Process
Six coordinated phases turn a question into calibrated epistemic understanding — not just a list of arguments.
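The section does not enumerate the six phases, so the stage names in this sketch are placeholders; it only illustrates the shape of a staged pipeline:

```python
from enum import Enum, auto

# The six phase names below are placeholders; the text does not
# enumerate the engine's actual phases.
class Phase(Enum):
    DECOMPOSE = auto()       # split the question into researchable sub-claims
    RESEARCH = auto()        # agents gather live web evidence on each side
    EXTRACT = auto()         # parse sources into typed belief nodes
    CROSS_EXAMINE = auto()   # surface contradictions between the two graphs
    FUSE = auto()            # merge and re-weight the shared graph
    VERDICT = auto()         # emit the calibrated epistemic summary

def run(question: str) -> None:
    for phase in Phase:      # phases run in declaration order
        print(f"[{phase.name.lower()}] {question}")

run("Should remote work be the default?")
```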
beliefs.before(): a structured brief injected into GPT-4o. The verdict is grounded in the belief graph, not raw web text.
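If the brief is serialized text handed to the standard OpenAI Python client, the injection might look like this minimal sketch. The stubbed beliefs module and the brief's format are assumptions; the section only names beliefs.before():

```python
from openai import OpenAI

# Stand-in for the engine's beliefs module; assumption: before()
# returns a serialized brief of top claims, weights, and open
# contradictions drawn from the belief graph.
class beliefs:
    @staticmethod
    def before() -> str:
        return "claim: ...; confidence: 0.62; open contradictions: 1"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The structured brief is injected as grounding context, so the
        # verdict cites the graph rather than raw web text.
        {"role": "system", "content": f"Belief-graph brief:\n{beliefs.before()}"},
        {"role": "user", "content": "Deliver the final verdict."},
    ],
)
print(response.choices[0].message.content)
```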
Capabilities
Traditional LLM research accumulates text. This system accumulates understanding.
Competitive landscape
Most tools accumulate text. This one accumulates structured understanding with calibrated uncertainty.
| Capability | Belief Engine | ChatGPT Deep Research | Perplexity | Elicit |
|---|---|---|---|---|
| Live web sources | ✓ | ✓ | ✓ | ~ |
| Adversarial agent structure | ✓ | ✗ | ✗ | ✗ |
| Belief graph (typed, weighted) | ✓ | ✗ | ✗ | ✗ |
| Semantic contradiction detection | ✓ | ✗ | ✗ | ✗ |
| Information-gain driven research | ✓ | ~ | ✗ | ✗ |
| Calibrated uncertainty output | ✓ | ✗ | ✗ | ~ |
| Persistent knowledge base | ✓ | ✗ | ✗ | ✗ |
| PDF / DOCX report export | ✓ | ~ | ✗ | ~ |

✓ = supported · ~ = partial · ✗ = not supported
Epistemic design
Clarity is not a quality score. It's epistemic readiness — computed across four independent channels.
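One way four independent channels could combine into a single readiness score is sketched below. The channel names and the geometric-mean combination rule are assumptions; the text only states that clarity is computed across four independent channels:

```python
import math

# The four channel names and the geometric-mean combination are
# assumptions; the section only says clarity spans four channels.
def clarity(coverage: float, consistency: float,
            diversity: float, calibration: float) -> float:
    channels = [coverage, consistency, diversity, calibration]
    # Geometric mean: one weak channel drags readiness down, so a
    # well-covered but self-contradictory graph still scores low.
    return math.prod(channels) ** (1 / len(channels))

print(round(clarity(0.9, 0.4, 0.8, 0.7), 2))  # 0.67
```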
Under the hood
Three agents, one shared namespace, one belief graph. The SDK handles fusion, contradiction detection, and move ranking automatically.
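A hypothetical sketch of how that SDK surface might look; every class, method, and agent name below is an assumption, not the actual API:

```python
# Hypothetical SDK sketch: all names are assumptions illustrating the
# architecture described above, not the engine's documented API.
class BeliefGraph:
    def __init__(self):
        self.nodes: list[dict] = []   # the single shared namespace

    def add(self, agent: str, claim: str, confidence: float) -> None:
        self.nodes.append({"agent": agent, "claim": claim, "confidence": confidence})

    def fuse(self) -> None:
        # Merging duplicate claims and re-weighting would happen here.
        pass

    def contradictions(self) -> list[tuple[int, int]]:
        # Semantic contradiction detection over node pairs (stubbed).
        return []

    def rank_moves(self) -> list[str]:
        # Next research actions ordered by expected information gain (stubbed).
        return []

graph = BeliefGraph()                    # one belief graph
for agent in ("pro", "con", "arbiter"):  # three agents (roles assumed)
    graph.add(agent, f"claim from {agent}", 0.5)
graph.fuse()
print(len(graph.nodes), graph.contradictions(), graph.rank_moves())
```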
Run your first debate in under a minute. No account needed. Results stream live as agents research in real time.
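A streaming generator is one natural shape for those live results. This quickstart sketch stubs the entry point to stay runnable; the debate() name and its event interface are assumptions, not a documented API:

```python
# Hypothetical quickstart; debate() and its streaming events are
# assumed names, stubbed here so the sketch runs as written.
def debate(question: str):
    """Stand-in for the engine's entry point (assumed)."""
    for update in ("pro: searching...", "con: searching...", "verdict: ..."):
        yield update

for event in debate("Is nuclear power the fastest path to decarbonization?"):
    print(event)   # results stream live as agents research
```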