You will build the trust engine that turns multi‑agent debate into a compliant, auditable system. From hash‑chained logs to HITL orchestration, you’ll make sure every decision can be traced and verified.
This role pioneers a runtime, agent‑centric provenance framework that satisfies ISO/IEC 23894 and the EU AI Act—an architectural property not yet realized in production multi‑agent AI systems.
Transparent Provenance and Regulatory Compliance Layer
From: Hallucination Amplification in Multi‑Agent Debate
HEAD must satisfy emerging AI governance standards (ISO/IEC 23894, EU AI Act) by providing immutable, cryptographically verifiable audit trails and HITL hooks. This requires a deep blend of secure systems engineering, blockchain techniques, and policy enforcement.
A runtime provenance engine that logs every claim, evidence source, and argumentative step with hash chains, integrates HITL interrupt signals, and exposes a compliance API for regulators.
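To make the hash‑chain idea concrete: a minimal sketch of such an audit log, in which each entry commits to the digest of the previous entry, so altering any logged claim breaks verification of the whole chain. The `AuditLog` class and its field names are hypothetical illustrations, not the actual engine.

```python
import hashlib
import json
import time

class AuditLog:
    """Hypothetical sketch of a hash-chained audit log. Each entry's
    digest covers the previous entry's digest, so tampering with any
    record invalidates every later link in the chain."""

    GENESIS = "0" * 64  # sentinel digest for the first entry

    def __init__(self):
        self.entries = []

    def append(self, agent: str, claim: str, evidence: list) -> dict:
        # Link this record to the digest of the previous entry.
        prev = self.entries[-1]["digest"] if self.entries else self.GENESIS
        record = {
            "agent": agent,
            "claim": claim,
            "evidence": evidence,
            "ts": time.time(),
            "prev": prev,
        }
        # Canonical serialization so verification re-derives the same bytes.
        payload = json.dumps(record, sort_keys=True).encode()
        record["digest"] = hashlib.sha256(prev.encode() + payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every digest from the genesis value forward.
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "digest"}
            payload = json.dumps(body, sort_keys=True).encode()
            expected = hashlib.sha256(prev.encode() + payload).hexdigest()
            if entry["prev"] != prev or entry["digest"] != expected:
                return False
            prev = entry["digest"]
        return True
```

In a real deployment the chain head would additionally be anchored externally (e.g. timestamped or countersigned) so the log cannot be truncated and rebuilt wholesale; the sketch shows only the per‑entry linking.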
PhD or Master’s in Computer Science, Information Security, or a related field.
By 12 months, deliver a fully auditable, HITL‑enabled debate system that satisfies ISO/IEC 23894 and EU AI Act requirements, enabling the company to launch in regulated medical and policy domains without additional compliance overhead.
Scale the provenance architecture across the entire product portfolio, becoming the lead architect for AI governance and compliance in a frontier AI company.
If this sounds like the challenge you’ve been looking for, we want to hear from you. We value what you can build over where you’ve been.