You’ll architect the next‑generation diffusion engine that projects adversarial perturbations onto the data manifold while respecting causal constraints. Your work will merge deep generative modeling with causal reasoning to produce realistic, actionable counterfactuals across vision, language, and graph domains.
Designing a diffusion model that simultaneously enforces manifold fidelity, causal consistency, and cross‑modal coherence pushes the limits of generative AI in safety‑critical applications.
Diffusion-Constrained Manifold Projection (ACE‑DMP)
From: Counterfactual Explanation Robustness to Adversarial Noise
The ACE‑DMP component is the linchpin that keeps counterfactuals on the true data manifold; without a robust diffusion engine, the explanations become unrealistic and untrustworthy.
A lightweight, high‑throughput diffusion backbone (DDPM/DPM‑Solver) that incorporates causal guidance, supports multimodal inputs, and delivers counterfactuals in real time.
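To give candidates a feel for the kind of sampler this role involves, here is a minimal, illustrative sketch: a plain DDPM ancestral-sampling loop with a classifier-guidance-style gradient nudge toward causal consistency. Everything here is a placeholder assumption for illustration only, not the ACE‑DMP codebase: the names `eps_model` and `causal_energy`, the toy two-feature causal constraint, and the linear noise schedule.

```python
# Illustrative sketch only: `eps_model`, `causal_energy`, the schedule, and the
# toy constraint are hypothetical placeholders, not part of ACE-DMP.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)    # cumulative product \bar{alpha}_t

def causal_energy(x0_hat: torch.Tensor) -> torch.Tensor:
    """Toy causal-consistency penalty: assume feature 1 is caused by feature 0
    (x1 ~ 2 * x0) and penalise counterfactuals that break that relation."""
    return ((x0_hat[:, 1] - 2.0 * x0_hat[:, 0]) ** 2).sum()

@torch.no_grad()
def sample(eps_model, shape, guidance_scale=1.0):
    """DDPM ancestral sampling with a classifier-guidance-style causal nudge."""
    x_t = torch.randn(shape)
    for t in reversed(range(T)):
        a_t, ab_t = alphas[t], alphas_bar[t]
        t_batch = torch.full((shape[0],), t)
        eps = eps_model(x_t, t_batch)
        # Posterior mean of p(x_{t-1} | x_t) under the DDPM parameterisation.
        mean = (x_t - betas[t] / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
        # Predicted clean sample x0_hat, used to score causal consistency.
        with torch.enable_grad():
            x_in = x_t.detach().requires_grad_(True)
            eps_g = eps_model(x_in, t_batch)
            x0_hat = (x_in - torch.sqrt(1.0 - ab_t) * eps_g) / torch.sqrt(ab_t)
            grad = torch.autograd.grad(causal_energy(x0_hat), x_in)[0]
        sigma_t = torch.sqrt(betas[t])
        mean = mean - guidance_scale * sigma_t ** 2 * grad   # causal guidance step
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + sigma_t * noise
    return x_t

# Usage with a stand-in noise predictor (a real system would use a trained network):
eps_model = lambda x, t: torch.zeros_like(x)
counterfactuals = sample(eps_model, shape=(8, 2), guidance_scale=0.5)
```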
PhD in Machine Learning, Computer Science, or a related field with a focus on generative modeling.
Deliver a diffusion‑based counterfactual engine that achieves >95% manifold adherence, reduces inference latency to <200 ms, and scales to multimodal inputs, thereby enabling trustworthy explanations in real‑time deployments.
Lead the generative AI division, setting the vision for future diffusion‑based safety and interpretability solutions.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.