You’ll pioneer a privacy‑aware causal discovery engine that powers adversarially robust counterfactual explanations. Your work will sit at the intersection of causal inference, differential privacy, and adversarial machine learning, enabling trustworthy explanations in multimodal, multi‑agent systems.
Building a causal graph that is both statistically sound and privacy‑preserving in high‑dimensional multimodal settings pushes the boundary of what causal discovery can achieve in adversarial production environments.
Causal Graph Learning for Adversarial Steering
From: Counterfactual Explanation Robustness to Adversarial Noise
The FCA depends on a high‑fidelity, privacy‑preserving causal graph to steer perturbations along semantically valid edges; without it, the counterfactuals become spurious and trust erodes.
A scalable causal discovery engine that learns directed acyclic graphs from multimodal data, embeds differential privacy guarantees, and exposes a causal API for downstream modules.
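To make the deliverable concrete, here is a minimal sketch of what the DAG core behind such a causal API might look like. All names (`CausalGraph`, `add_edge`, `parents`) are hypothetical illustrations, not an existing interface; the discovery, privacy, and multimodal layers described above would sit on top of a structure like this.

```python
from dataclasses import dataclass, field

@dataclass
class CausalGraph:
    """Hypothetical DAG over named variables, as a downstream causal API might expose it."""
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # (cause, effect) pairs

    def add_edge(self, cause: str, effect: str) -> None:
        # Reject edges that would introduce a cycle, preserving the DAG invariant
        # that counterfactual steering relies on.
        self.nodes.update((cause, effect))
        self.edges.add((cause, effect))
        if not self.is_acyclic():
            self.edges.discard((cause, effect))
            raise ValueError(f"edge {cause}->{effect} would create a cycle")

    def parents(self, node: str) -> set:
        # Direct causes of `node`; a perturbation module would restrict
        # interventions to these semantically valid edges.
        return {c for c, e in self.edges if e == node}

    def is_acyclic(self) -> bool:
        # Kahn's algorithm: the graph is a DAG iff every node can be
        # consumed in a topological order.
        indegree = {n: 0 for n in self.nodes}
        for _, effect in self.edges:
            indegree[effect] += 1
        queue = [n for n, d in indegree.items() if d == 0]
        seen = 0
        while queue:
            n = queue.pop()
            seen += 1
            for cause, effect in self.edges:
                if cause == n:
                    indegree[effect] -= 1
                    if indegree[effect] == 0:
                        queue.append(effect)
        return seen == len(self.nodes)
```

In practice the edge set would be learned from data under a differential‑privacy budget rather than asserted by hand; the sketch only fixes the invariant (acyclicity) that every layer above it assumes.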
PhD in Statistics, Computer Science, or a related field with a focus on causal inference.
Within 12 months, deliver a causal discovery service that reduces spurious counterfactuals by 90%, enabling downstream modules to generate actionable explanations that remain valid under adversarial perturbations.
Lead a dedicated causal inference team, shaping the next generation of privacy‑preserving, causally grounded AI systems.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.