Architect the probabilistic inference backbone that lets multi‑agent systems reason about policy uncertainty in the presence of adversarial observation noise, enabling autonomous decision‑making under threat.
Your work will pioneer a joint GAN‑Bayesian inference pipeline that can be queried in real time on distributed agents — a novel contribution to robust multi‑agent reinforcement learning (MARL) that blends generative modeling with hierarchical Bayesian inference.
Bayesian Policy Inference (BPI) with marginalization over a generative observation model
From: Adversarial Observation Perturbations and Policy Inference
This role is essential to building the hierarchical Bayesian framework that produces robust policy posteriors under unseen adversarial perturbations.
Amortized variational inference engine, Monte Carlo integration module, policy prior integration with CRL, and a scalable inference runtime for distributed agents.
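To make the core idea concrete, here is a minimal sketch of BPI's Monte Carlo marginalization step: the action posterior is estimated by averaging a policy over latent states sampled from a generative observation model. All names, the Gaussian observation model, and the toy softmax policy are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generative_observation_model(obs, n_samples):
    # Hypothetical generative model: sample candidate true states
    # consistent with a noisy observation (Gaussian noise assumed here;
    # the real system would learn this model, e.g. with a GAN).
    noise_scale = 0.5
    return obs + noise_scale * rng.standard_normal((n_samples, obs.shape[0]))

def policy(state):
    # Toy softmax policy over 3 discrete actions, linear in a 2-D state.
    W = np.array([[1.0, -0.5], [0.2, 0.8], [-1.0, 0.3]])
    logits = W @ state
    e = np.exp(logits - logits.max())
    return e / e.sum()

def bpi_action_posterior(obs, n_samples=1000):
    # Monte Carlo marginalization:
    #   p(a | o) ~= (1/N) * sum_i pi(a | s_i),  s_i ~ p(s | o)
    states = generative_observation_model(obs, n_samples)
    probs = np.stack([policy(s) for s in states])
    return probs.mean(axis=0)

posterior = bpi_action_posterior(np.array([0.3, -0.1]))
print(posterior)  # a length-3 probability vector summing to 1
```

In a production pipeline, the per-sample policy evaluation would be replaced by the amortized variational inference engine so the posterior can be queried in real time on distributed agents.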
PhD in Statistics, Machine Learning, or Robotics with a focus on Bayesian methods.
Achieve a 25% improvement in cooperative task success under adversarial telemetry in simulated UAV swarms within 12 months, demonstrating the practical value of Bayesian policy inference.
Scale the inference engine to multi‑domain deployments (e.g., autonomous driving, cyber‑defense) and lead the integration of counterfactual explainability for human‑in‑the‑loop oversight.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.