You will architect and ship the evidence engine that keeps the debate honest. From vector‑search backends to policy‑driven query engines, you’ll turn raw knowledge into the fuel that powers every agent’s argument.
This role pushes the boundary of retrieval‑augmented reasoning by marrying LLMs with verifiable knowledge sources in a live, multi‑agent setting, where no existing product yet offers end‑to‑end, low‑latency evidence guarantees.
Agent‑Specific Evidence Retrieval and Query Policy
From: Hallucination Amplification in Multi‑Agent Debate
The HEAD framework’s core innovation is that every debating agent must autonomously fetch, vet, and prioritize evidence from curated knowledge bases. Building this retrieval backbone requires deep expertise in retrieval‑augmented generation (RAG), semantic search, and real‑time data pipelines.
A production‑grade, low‑latency retrieval engine that supports confidence‑weighted queries, integrates domain ontologies, and streams verified snippets to the debate orchestrator.
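To give a flavour of the work, here is a minimal sketch of what a confidence‑weighted query could look like, assuming an in‑memory vector index and curator‑assigned per‑source confidence priors; the names (EvidenceSnippet, retrieve_weighted, confidence_weight) are illustrative, not an existing API.

```python
# Illustrative sketch only: data model, scoring scheme, and names are assumptions,
# not part of any existing internal API.
from dataclasses import dataclass
from typing import Iterator
import math


@dataclass
class EvidenceSnippet:
    text: str
    source_id: str
    source_confidence: float  # curator-assigned prior in [0, 1]
    embedding: list[float]


def cosine(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve_weighted(
    query_embedding: list[float],
    index: list[EvidenceSnippet],
    top_k: int = 5,
    confidence_weight: float = 0.3,
) -> Iterator[EvidenceSnippet]:
    """Blend semantic similarity with the per-source confidence prior,
    then stream the top-ranked snippets to the debate orchestrator."""
    scored = []
    for snippet in index:
        score = (1 - confidence_weight) * cosine(query_embedding, snippet.embedding)
        score += confidence_weight * snippet.source_confidence
        scored.append((score, snippet))
    for _, snippet in sorted(scored, key=lambda pair: pair[0], reverse=True)[:top_k]:
        # Yield one snippet at a time so the orchestrator can start consuming
        # evidence before the full ranking is finished.
        yield snippet
```

The real engine would sit on a proper vector‑search backend and a richer scoring policy; the point of the sketch is the shape of the problem, trading off semantic relevance against source trust while keeping latency low.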
Master’s or PhD in Computer Science, Information Retrieval, or a related field.
Within 12 months, deliver a retrieval engine that reduces the hallucination amplification rate to below 3% by ensuring every claim is backed by a cryptographically verifiable evidence snippet, while keeping token usage below 60% of baseline.
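As a rough illustration of what a cryptographically verifiable evidence snippet could mean in practice, the sketch below signs each snippet with an HMAC over its text and source id so the orchestrator can detect tampering; the scheme and function names are assumptions, not the framework's actual protocol.

```python
# Illustrative sketch only: HMAC-SHA256 over the snippet payload is one possible
# verification scheme, chosen here for simplicity with the standard library.
import hashlib
import hmac
import json


def sign_snippet(text: str, source_id: str, key: bytes) -> dict:
    """Attach a tamper-evident digest and signature to a snippet before it enters the debate."""
    payload = json.dumps({"text": text, "source_id": source_id}, sort_keys=True).encode()
    return {
        "text": text,
        "source_id": source_id,
        "digest": hashlib.sha256(payload).hexdigest(),
        "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
    }


def verify_snippet(record: dict, key: bytes) -> bool:
    """Recompute the signature; any change to the text or source id invalidates the claim's backing."""
    payload = json.dumps(
        {"text": record["text"], "source_id": record["source_id"]}, sort_keys=True
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```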
Lead the evidence‑integration strategy across all high‑stakes product lines, shaping the next generation of verifiable AI reasoning systems.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.