Lead the frontier of causal inference in high‑stakes multi‑agent systems. You’ll design algorithms that turn noisy, partially observable logs into a principled causal fabric, enabling trustworthy blame signals that survive adversarial manipulation.
You’ll pioneer hybrid Bayesian‑neural causal discovery that blends constraint‑based (PC) and continuous‑optimization (NOTEARS) methods with graph‑neural‑network priors, enabling online, cycle‑aware learning in non‑stationary environments, a capability with no commercial precedent.
Focus area: Causal Discovery Layer of CRAN
Motivating problem: Misattribution of Blame in Cooperative Multi‑Agent Systems
The causal graph is the foundation of blame attribution; without a robust, online‑learning causal model, the entire CRAN framework collapses.
You’ll build an end‑to‑end causal discovery engine that learns Bayesian DAGs from multi‑agent execution logs, incorporates domain priors, detects latent cycles, and outputs uncertainty‑quantified influence structures for downstream modules.
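To give a concrete flavor of the NOTEARS component named above: NOTEARS casts DAG learning as continuous optimization, replacing combinatorial acyclicity search with a smooth penalty h(W) = tr(e^{W∘W}) − d that is zero exactly when the weighted adjacency matrix W is acyclic. The sketch below is illustrative only, not CRAN's engine: `notears_linear`, its hyperparameters, and the plain gradient‑descent solver are simplifications of the published augmented‑Lagrangian method, assuming a linear structural equation model.

```python
import numpy as np
from scipy.linalg import expm


def notears_linear(X, lambda1=0.05, rho=2.0, lr=0.02, n_iter=3000, seed=0):
    """Fit a linear SEM X = XW + noise with a NOTEARS-style acyclicity penalty.

    Minimizes, by plain gradient descent (a teaching simplification of the
    paper's augmented-Lagrangian scheme):

        0.5/n * ||X - XW||_F^2  +  lambda1 * ||W||_1  +  rho * h(W),

    where h(W) = tr(exp(W ∘ W)) - d vanishes iff W encodes a DAG.
    """
    n, d = X.shape
    W = np.random.default_rng(seed).normal(scale=0.01, size=(d, d))
    np.fill_diagonal(W, 0.0)
    for _ in range(n_iter):
        grad_ls = X.T @ (X @ W - X) / n   # gradient of the least-squares term
        E = expm(W * W)                   # matrix exponential of W ∘ W
        grad_h = E.T * (2.0 * W)          # ∇h(W) = (e^{W∘W})^T ∘ 2W
        W -= lr * (grad_ls + rho * grad_h + lambda1 * np.sign(W))
        np.fill_diagonal(W, 0.0)          # no self-loops
    return W
```

On data simulated from a two‑variable chain (x1 = 2·x0 + noise), the fit places weight on the 0→1 edge while the acyclicity penalty drives the reverse edge toward zero; the full method adds a final hard threshold on |W| to prune residual small weights.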
PhD in Computer Science, Statistics, or a related field with a focus on causal inference or probabilistic modeling.
Within 12 months, deliver a production‑ready causal discovery engine that processes millions of MAS log events per hour, producing a DAG with a false‑positive edge rate under 5% and edge‑uncertainty variance under 10%, enabling downstream blame attribution to achieve 30% higher accuracy than baseline methods.
Lead a cross‑disciplinary team that expands CRAN into new domains (autonomous defense, supply‑chain logistics), mentor a growing research lab, and shape the company’s long‑term causal AI strategy.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.