You will build the trust backbone of our multi‑agent system, turning abstract Bayesian updates into a robust, low‑latency trust‑propagation service that survives benign noise and active adversaries alike. Your work will be the invisible guard that keeps the entire JIT framework from collapsing.
This role fuses Bayesian trust modeling, blockchain‑based identity validation, and real‑time inference on heterogeneous devices: a combination not yet deployed at scale in multi‑agent AI.
Dynamic Trust‑Score Propagation (DTSP)
Protects against: cascading misinterpretation and suboptimal joint actions.
DTSP is the safety net that prevents the sink effect and limits the spread of misinterpretation, making it essential to the overall JIT framework.
A lightweight, real‑time trust‑score engine that attaches a Bayesian trust score to every message, updates scores via a Bayesian filter, interfaces with CGCE and JPRO‑SOB, and supports cross‑chain DID validation under adversarial threat models.
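To make the core idea concrete, here is a minimal sketch of one common choice of Bayesian filter for per‑peer trust, a Beta‑Bernoulli update. The `TrustScore` class, the uniform prior, and the evidence weighting are illustrative assumptions, not the production design:

```python
from dataclasses import dataclass


@dataclass
class TrustScore:
    """Beta-Bernoulli trust model: alpha counts positive evidence, beta negative."""
    alpha: float = 1.0  # prior pseudo-count of trustworthy interactions (uniform prior)
    beta: float = 1.0   # prior pseudo-count of untrustworthy interactions

    @property
    def mean(self) -> float:
        """Point estimate of trustworthiness, attached to each outgoing message."""
        return self.alpha / (self.alpha + self.beta)

    def update(self, outcome_trustworthy: bool, weight: float = 1.0) -> None:
        """Conjugate Bayesian update after observing one interaction outcome."""
        if outcome_trustworthy:
            self.alpha += weight
        else:
            self.beta += weight


# Example: a peer that behaved reliably in 8 of 10 observed interactions
score = TrustScore()
for ok in [True] * 8 + [False] * 2:
    score.update(ok)
print(round(score.mean, 3))  # (1 + 8) / (2 + 10) = 0.75
```

The conjugate Beta‑Bernoulli form keeps each update to two additions and one division, which is what makes per‑message updates feasible at low latency on edge devices.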
PhD in Computer Science or AI Safety with a focus on trust or probabilistic modeling.
Deploy, within 12 months, a trust‑propagation engine that cuts cascading misinterpretation by 25% and measurably improves resilience to adversarial attacks, as validated in simulation and real‑world edge deployments.
Advance to head of trust & safety for our multi‑agent platform, shaping industry standards for trustworthy autonomous systems.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.