Lead the design and deployment of an adversarial training loop for multi‑agent systems. You’ll build the evolutionary attacker generator, craft dynamic role policies, and ensure the resulting agents converge under Byzantine threat models while remaining interpretable.
This role pioneers an end‑to‑end, evolution‑driven multi‑agent training framework that adapts roles in real time, a capability not yet demonstrated at scale in hostile environments.
Dynamic Role-Based Adversarial Training (DRAT)
From: Adaptive Multi‑Agent Defense Against Adversarial Coordination
DRAT is the core of RACE’s adaptive learning loop, requiring a deep blend of multi‑agent reinforcement learning, evolutionary adversary design, and on‑the‑fly role re‑allocation to prevent over‑specialization and expose agents to unseen attack patterns.
You will deliver a production‑grade DRAT pipeline that (1) generates evolutionary attacker populations, (2) orchestrates dynamic role assignment across the Orchestrator, Executor, Ground, Critic, and Memory agents, and (3) integrates the hardened policies into the RACE coordination engine.
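The inner loop behind that pipeline, evolving an attacker population against the defender team and reshuffling roles each generation so no agent over‑specializes, can be sketched as follows. This is a minimal illustration, not the RACE/DRAT implementation: the `Attacker` class, the scalar `strength`/skill stand‑ins for policies, and the fitness and selection rules are all assumptions for the sake of the example.

```python
import random

# Illustrative sketch of one DRAT-style generation loop (not the actual
# RACE/DRAT API): evolve attackers, train defenders against the best one,
# then reassign roles based on current skill.

ROLES = ["Orchestrator", "Executor", "Ground", "Critic", "Memory"]

class Attacker:
    def __init__(self, strength):
        self.strength = strength  # scalar stand-in for an attack policy

    def mutate(self, sigma=0.1):
        # Gaussian perturbation stands in for policy mutation.
        return Attacker(self.strength + random.gauss(0, sigma))

def fitness(attacker, defender_skill):
    # Attacker fitness: margin by which it exceeds the defenders' skill.
    return attacker.strength - defender_skill

def evolve(population, defender_skill, keep=0.5):
    # Truncation selection: keep the fittest half, refill with mutants.
    ranked = sorted(population, key=lambda a: fitness(a, defender_skill),
                    reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep))]
    children = [random.choice(survivors).mutate()
                for _ in range(len(population) - len(survivors))]
    return survivors + children

def reassign_roles(skills):
    # Dynamic role assignment: rank agents by current skill and map them
    # onto roles, so the mapping changes as skills drift across generations.
    order = sorted(skills, key=skills.get, reverse=True)
    return {agent: role for agent, role in zip(order, ROLES)}

random.seed(0)
attackers = [Attacker(random.random()) for _ in range(20)]
defender_skills = {f"agent{i}": random.random() for i in range(5)}

for generation in range(10):
    team_skill = sum(defender_skills.values()) / len(defender_skills)
    attackers = evolve(attackers, team_skill)
    # Defenders improve by training against the current best attacker.
    best = max(attackers, key=lambda a: fitness(a, team_skill))
    for agent in defender_skills:
        defender_skills[agent] += 0.05 * max(0.0,
                                             best.strength - defender_skills[agent])
    roles = reassign_roles(defender_skills)
```

In a real system the scalar skills would be learned policies and the fitness a simulated engagement outcome, but the loop structure (evolve, counter‑train, reassign) is the shape the pipeline above describes.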
PhD in Computer Science, Robotics, or a related field, with a strong emphasis on reinforcement learning or adversarial machine learning.
Within 12 months, deliver a fully automated DRAT pipeline that reduces the adversarial success rate by >70% in simulated UAV swarm scenarios, with provable convergence guarantees under a bounded fraction of Byzantine agents.
Lead the expansion of DRAT to new domains (cyber‑physical networks, decentralized finance), mentor a growing research team, and shape the next generation of adaptive multi‑agent training frameworks.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.