
Principal ML/AI Engineer – Perturbation-Gradient Consensus Attribution (PGCA)

Frontier Development
ML/AI Engineer · Principal · 1 position

Why This Role is Different

Frontier Development Role

You will architect a highly reliable explainability engine, blending perturbation and gradient signals into a consensus attribution that withstands adversarial manipulation.

The Frontier Element

By aligning perturbation maps with gradient maps via Wasserstein-style alignment, you will create the first attribution method that is provably robust to gradient masking and adversarial perturbations.
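
To make the alignment idea concrete, here is a minimal sketch (not the PGCA implementation) of comparing a perturbation map and a gradient map with an entropy-regularized (Sinkhorn) optimal-transport distance; the map sizes, ground cost, and regularization strength are illustrative assumptions.

```python
import torch

def sinkhorn_distance(p_map, g_map, coords, reg=0.05, n_iters=100):
    """Entropy-regularized optimal-transport distance between two attribution maps.

    p_map, g_map: non-negative saliency maps (any matching shape).
    coords:       (N, 2) pixel coordinates used to build the ground cost.
    """
    # Normalize both maps into probability distributions over pixels.
    a = p_map.flatten().clamp(min=0)
    a = a / a.sum()
    b = g_map.flatten().clamp(min=0)
    b = b / b.sum()

    # Squared-Euclidean ground cost between pixel locations, scaled to [0, 1].
    cost = torch.cdist(coords, coords, p=2) ** 2
    cost = cost / cost.max()

    # Standard Sinkhorn iterations.
    K = torch.exp(-cost / reg)
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)

    transport = torch.diag(u) @ K @ torch.diag(v)
    return (transport * cost).sum()

# Example on 16x16 downsampled maps (placeholders for real attribution outputs).
h = w = 16
ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()
perturbation_map = torch.rand(h, w)
gradient_map = torch.rand(h, w)
print(sinkhorn_distance(perturbation_map, gradient_map, coords).item())
```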

🔬

Project Context

Research Area

Hybrid attribution for robust explainability

From: Gradient Masking in Adversarial Training and Explainability

Why This Role is Critical

PGCA fuses perturbation and gradient explanations; in this role you will build the consensus engine, optimize its efficiency, and ensure it stays robust under attack.

What You Will Build

A PGCA library that runs both offline and online, produces high-fidelity attribution maps that survive adversarial masking, and integrates with the FGMF pipeline.
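
As one rough illustration of what fusing the two signals can look like (the actual PGCA consensus rule is defined by the project, not here), the sketch below combines a perturbation map and a gradient map with an elementwise geometric mean, which rewards regions where both explainers agree; the normalization and fusion rule are assumptions for illustration only.

```python
import torch

def consensus_map(perturb_map, grad_map, eps=1e-8):
    """Toy consensus: elementwise geometric mean of two normalized attribution maps.

    Regions highlighted by only one explainer are attenuated, so a gradient-masked
    model cannot dominate the fused explanation on its own.
    """
    p = perturb_map.clamp(min=0)
    g = grad_map.clamp(min=0)
    p = p / (p.max() + eps)   # scale both maps to [0, 1]
    g = g / (g.max() + eps)
    return torch.sqrt(p * g)  # high only where both maps agree

fused = consensus_map(torch.rand(14, 14), torch.rand(14, 14))
```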

🛠

Key Responsibilities

  • Implement perturbation mask generation (zero, Gaussian noise) and efficient forward passes (see the occlusion-style sketch after this list).
  • Design the consensus amplification and Wasserstein alignment stages for spatial coherence.
  • Benchmark faithfulness metrics (GHR, ASR-M) against state-of-the-art explainers under AutoAttack.
  • Optimize for GPU parallelism and memory reuse to keep inference latency < 50 ms for real‑time use.
  • Build a user‑facing API and visualization suite for regulatory audit and debugging.
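
For a flavor of the first responsibility above, here is a minimal occlusion-style sketch that generates zero and Gaussian-noise patch masks and scores all masked copies in one batched forward pass; the patch size, masking modes, and model interface are assumptions, not the production pipeline.

```python
import torch

@torch.no_grad()
def occlusion_attribution(model, image, target, patch=16, mode="zero", sigma=0.5):
    """Occlusion-style perturbation attribution with one batched forward pass.

    image:  (C, H, W) input tensor; target: class index to explain.
    mode:   "zero" replaces each patch with zeros, "gaussian" adds noise to it.
    Returns an (H // patch, W // patch) map of confidence drops.
    """
    c, h, w = image.shape
    gh, gw = h // patch, w // patch
    base = model(image.unsqueeze(0)).softmax(-1)[0, target]

    perturbed = []
    for i in range(gh):
        for j in range(gw):
            x = image.clone()
            ys = slice(i * patch, (i + 1) * patch)
            xs = slice(j * patch, (j + 1) * patch)
            if mode == "zero":
                x[:, ys, xs] = 0.0
            else:  # "gaussian"
                x[:, ys, xs] += sigma * torch.randn(c, patch, patch, device=x.device)
            perturbed.append(x)

    # Scoring all masked copies in a single batch keeps the GPU busy; for large
    # grids this batch would be chunked to respect memory limits.
    probs = model(torch.stack(perturbed)).softmax(-1)[:, target]
    return (base - probs).reshape(gh, gw)
```
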
🎯

Required Skills & Experience

Technical Must-Haves

  • Perturbation- and gradient-based attribution methods (e.g., LIME, SHAP, Integrated Gradients) - Expert: understanding their strengths and weaknesses
  • Wasserstein distance and optimal transport - Advanced: implementing alignment for perturbation maps
  • GPU programming (CUDA, cuDNN) - Expert: optimizing the PGCA pipeline
  • Python, PyTorch, JAX, and performance profiling - Expert: building high-performance explainability modules
  • Robustness evaluation (AutoAttack, FGSM, PGD) - Advanced: testing PGCA under attacks (see the sketch after this list)
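
To illustrate the robustness-evaluation skill, the sketch below perturbs an input with a single FGSM step and checks how much a given attribution map shifts; the epsilon, the attribute_fn interface, and the cosine-similarity check are hypothetical stand-ins, not the GHR or ASR-M metrics named above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    """One-step FGSM adversarial example inside an L-infinity ball of radius eps."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([label], device=x.device))
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).squeeze(0).detach()

def attribution_shift(attribute_fn, model, image, label, eps=8 / 255):
    """Measure how much an attribution map moves under an FGSM perturbation.

    attribute_fn(model, image, label) -> saliency tensor; a hypothetical interface
    standing in for a PGCA-style explainer.
    """
    clean_map = attribute_fn(model, image, label).flatten()
    adv_image = fgsm_attack(model, image, label, eps)
    adv_map = attribute_fn(model, adv_image, label).flatten()
    # Cosine similarity near 1 means the explanation is stable under attack.
    return F.cosine_similarity(clean_map, adv_map, dim=0).item()
```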

Experience Requirements

  • 7+ years in AI research with a focus on explainability and robustness
  • Published work on attribution methods or adversarial defense

Education

PhD in Computer Science, Electrical Engineering, or related field with emphasis on XAI

Preferred Skills

  • Experience with regulatory compliance for AI in healthcare or autonomous systems
  • Knowledge of model compression and edge deployment
🤝

You Will Thrive Here If...

  • You excel at bridging research and production, delivering production-grade code
  • You are driven by curiosity and a desire to push the boundaries of explainability
📈

Impact & Growth

12-Month Impact

Within a year, release PGCA as an open-source library that achieves ≥10% higher faithfulness than Grad‑CAM++ on ImageNet under AutoAttack, and integrate it into the company's AI platform.

Growth Opportunity

Lead the explainability roadmap, scaling PGCA to multi-modal and multi-agent settings, and mentor a team of XAI engineers.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.