
Principal Diffusion Model Architect – Counterfactual Projection

corpora-jobs-1778796293285-db9d41c6 – Frontier Development
ML/AI Engineer · Principal · 1 position

Why This Role is Different

Frontier Development Role

You’ll architect the next‑generation diffusion engine that projects adversarial perturbations onto the data manifold while respecting causal constraints. Your work will merge deep generative modeling with causal reasoning to produce realistic, actionable counterfactuals across vision, language, and graph domains.

The Frontier Element

Designing a diffusion model that simultaneously enforces manifold fidelity, causal consistency, and cross‑modal coherence pushes the limits of generative AI in safety‑critical applications.

🔬

Project Context

Research Area

Diffusion-Constrained Manifold Projection (ACE‑DMP)

From: Counterfactual Explanation Robustness to Adversarial Noise

Why This Role is Critical

The ACE‑DMP component is the linchpin that guarantees counterfactuals stay on the true data manifold; without a robust diffusion engine the explanations become unrealistic and untrustworthy.

What You Will Build

A lightweight, high‑throughput diffusion backbone (DDPM/DPM‑Solver) that incorporates causal guidance, supports multimodal inputs, and delivers counterfactuals in real time.
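To make "real time" concrete: DDIM‑style deterministic sampling is one standard way a backbone like this accelerates denoising, by thinning the full timestep schedule. The sketch below (plain NumPy, with a toy denoiser and linear schedule standing in for the real model, both assumptions, not the actual ACE‑DMP engine) shows the core skip‑step update:

```python
import numpy as np

def ddim_sample(eps_model, x_T, alphas_bar, steps):
    """Deterministic DDIM sampling over a thinned timestep schedule.

    eps_model(x, t) predicts the noise present at step t (a toy stand-in here);
    alphas_bar is the cumulative product of the noise schedule.
    """
    T = len(alphas_bar) - 1
    # Thin the full schedule down to `steps` timesteps -- the source of the speedup.
    ts = np.linspace(T, 0, steps + 1).round().astype(int)
    x = x_T
    for t_cur, t_prev in zip(ts[:-1], ts[1:]):
        eps = eps_model(x, t_cur)
        ab_cur, ab_prev = alphas_bar[t_cur], alphas_bar[t_prev]
        # Predict the clean sample, then re-noise it directly to the earlier timestep.
        x0 = (x - np.sqrt(1.0 - ab_cur) * eps) / np.sqrt(ab_cur)
        x = np.sqrt(ab_prev) * x0 + np.sqrt(1.0 - ab_prev) * eps
    return x

# Toy usage: a 1000-step linear schedule, sampled in only 25 denoising steps.
alphas_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1001))
x_T = np.random.default_rng(0).standard_normal((4, 8))
x_0 = ddim_sample(lambda x, t: np.zeros_like(x), x_T, alphas_bar, steps=25)
```

Cutting 1000 denoising steps to ~25 with an update of this shape is what makes sub‑second counterfactual generation plausible.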

🛠

Key Responsibilities

  • Engineer a diffusion architecture (DDPM, DDIM, DPM‑Solver) optimized for multimodal data and causal guidance.
  • Implement causal conditioning mechanisms that steer the diffusion process along learned graph edges.
  • Develop efficient sampling pipelines (e.g., accelerated denoising, adaptive step sizing) to meet real‑time inference constraints.
  • Integrate with the Causal Discovery Engine to receive edge‑confidence scores and apply them during diffusion guidance.
  • Benchmark manifold adherence, fidelity, and computational latency against state‑of‑the‑art baselines.
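The second and third bullets can be illustrated with a classifier‑guidance‑style update in which the edge‑confidence scores from the Causal Discovery Engine attenuate the steering gradient per feature. This is a hedged sketch of one plausible mechanism, not the posting's actual design; the function and argument names are hypothetical:

```python
import numpy as np

def causally_guided_eps(eps, grad_log_p, edge_conf, scale=1.0):
    """Causal conditioning via weighted guidance (illustrative sketch).

    eps        -- the diffusion model's noise prediction, shape (batch, features)
    grad_log_p -- gradient of a log-likelihood steering toward the counterfactual
    edge_conf  -- per-feature edge-confidence scores in [0, 1], shape (features,)

    Low-confidence causal edges receive proportionally less steering, so the
    sampler only pushes the counterfactual along directions the graph supports.
    """
    return eps - scale * edge_conf * grad_log_p

# Hypothetical usage: a confident edge (1.0) is fully steered, a doubtful one (0.1) barely.
eps = np.ones((2, 3))
grad = np.ones((2, 3))
conf = np.array([1.0, 0.5, 0.1])
guided = causally_guided_eps(eps, grad, conf)
# guided[:, 0] -> 0.0 (fully steered), guided[:, 2] -> 0.9 (nearly untouched)
```

The adjusted prediction then drops into any standard sampler (DDPM, DDIM, DPM‑Solver) in place of the raw `eps`, which is why guidance of this form composes cleanly with the accelerated pipelines named above.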
🎯

Required Skills & Experience

Technical Must-Haves

Deep generative modeling (diffusion, GANs)

Expert
Designing, training, and deploying diffusion models for high‑dimensional data.

CUDA, PyTorch/TensorFlow, and distributed training

Expert
Optimizing GPU utilization and scaling training to billions of parameters.

Causal conditioning and graph‑aware inference

Advanced
Embedding causal constraints into diffusion guidance.

Performance profiling and latency optimization

Advanced
Ensuring sub‑second inference for real‑time counterfactual generation.

Experience Requirements

  • 5+ years in generative modeling, with a track record of publishing diffusion‑based research.
  • Hands‑on experience deploying large‑scale diffusion models in production.
  • Experience with multimodal data pipelines (vision, language, graph).

Education

PhD in Machine Learning, Computer Science, or a related field with a focus on generative modeling.

Preferred Skills

  • Knowledge of graph‑aware diffusion architectures (e.g., GNN‑conditioned DDPM).
  • Experience with medical imaging or other high‑stakes domains requiring manifold fidelity.
🤝

You Will Thrive Here If...

  • You excel at turning research prototypes into shipping systems.
  • You are comfortable taking ownership of end‑to‑end performance and reliability.
📈

Impact & Growth

12-Month Impact

Deliver a diffusion‑based counterfactual engine that achieves >95% manifold adherence, reduces inference latency to <200 ms, and scales to multimodal inputs, thereby enabling trustworthy explanations in real‑time deployments.

Growth Opportunity

Lead the generative AI division, setting the vision for future diffusion‑based safety and interpretability solutions.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.