Architect a cutting‑edge neuro‑symbolic system that lets agents reason over domain ontologies while learning from sparse interactions. Your work will make it possible to generate human‑readable, audit‑ready rationales on demand, a first for adversarial MARL.
You will pioneer a dynamic hypernetwork that generates task‑specific symbolic constraints on the fly, allowing the policy to adapt to evolving knowledge graphs without retraining the entire network.
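The idea of a hypernetwork emitting task-specific constraints can be sketched as follows. This is a minimal illustrative example, not an existing implementation: the names (`constraint_mask`, `constrained_policy`, `W_hyper`) and the choice of a soft sigmoid mask over action logits are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

KG_DIM = 8      # size of the knowledge-graph task embedding (assumed)
N_ACTIONS = 4   # size of the agent's action space (assumed)

# Hypernetwork weights: map a KG embedding to per-action constraint logits.
W_hyper = rng.normal(0, 0.1, (KG_DIM, N_ACTIONS))

def constraint_mask(kg_embedding: np.ndarray) -> np.ndarray:
    """Generate a soft [0, 1] feasibility mask from the current KG embedding.

    When the knowledge graph evolves, only the embedding changes; the
    policy weights stay fixed, so the full network is not retrained.
    """
    logits = kg_embedding @ W_hyper
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> soft constraints

def constrained_policy(policy_logits: np.ndarray,
                       kg_embedding: np.ndarray) -> np.ndarray:
    """Apply the generated constraints to raw policy logits via a softmax."""
    mask = constraint_mask(kg_embedding)
    masked = policy_logits + np.log(mask + 1e-8)  # down-weight infeasible actions
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

probs = constrained_policy(rng.normal(size=N_ACTIONS), rng.normal(size=KG_DIM))
```

In this sketch the hypernetwork is a single linear map for brevity; in practice it would itself be a trained network conditioned on the KG state.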
Neuro‑Symbolic Hybrid Training with Knowledge Graphs for Explainable MARL
From: Explainability Budget Optimization for Sample Efficiency
This role is essential for fusing symbolic knowledge into policy networks, enabling cached feature‑level attributions and explicit rationales that satisfy regulatory mandates.
A hybrid policy architecture that interleaves neural policy layers with KG‑driven symbolic reasoning, a caching layer for reusable explanations, and a training pipeline that jointly optimizes performance and interpretability.
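The caching layer for reusable explanations could look like the following sketch. Everything here is a hypothetical illustration: `KG_RULES` stands in for the KG‑driven symbolic reasoning step, and the `(state_abstraction, action)` cache key is an assumption about how rationales would be reused.

```python
from functools import lru_cache

# Stand-in for symbolic rules derived from the knowledge graph (assumed content).
KG_RULES = {
    ("low_fuel", "land"): "Rule R1: agents with low fuel must land at the nearest pad.",
    ("enemy_near", "retreat"): "Rule R7: retreat when an adversary is within sensor range.",
}

@lru_cache(maxsize=4096)
def explain(state_abstraction: str, action: str) -> str:
    """Return a human-readable rationale, computed once per (state, action).

    Repeated queries for the same abstract state hit the cache instead of
    re-running symbolic reasoning, which is what makes attributions cheap
    enough to serve on demand.
    """
    return KG_RULES.get(
        (state_abstraction, action),
        f"No explicit rule matched; action '{action}' chosen by the neural policy.",
    )

r1 = explain("low_fuel", "land")
r2 = explain("low_fuel", "land")  # served from the cache
```

A production version would invalidate cached entries when the knowledge graph changes; `lru_cache` is used here only to show the reuse pattern.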
PhD in AI, Knowledge Representation, or a related field.
By year‑end, deploy a neuro‑symbolic MARL agent that achieves ≥25% faster convergence than a purely neural baseline while producing audit‑ready explanations that pass an external regulatory review.
Scale the hybrid framework to multi‑modal domains and lead a research‑to‑product pipeline for a portfolio of regulated applications.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.