You will architect a highly reliable explainability engine, blending perturbation and gradient signals into a consensus that withstands adversarial manipulation.
By aligning perturbation maps with gradient maps via Wasserstein-style alignment, you will build an attribution method designed to remain robust under gradient masking and adversarial perturbations.
Hybrid attribution for robust explainability
From: Gradient Masking in Adversarial Training and Explainability
PGCA fuses perturbation and gradient explanations; in this role, you will build the consensus engine, optimize its efficiency, and ensure its robustness under attack.
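To make the fusion concrete, here is a minimal sketch of one way a perturbation/gradient consensus with Wasserstein-style alignment could look. Everything below is an illustrative assumption rather than the actual PGCA algorithm: the function names, the plain Sinkhorn solver, the disagreement discount `lam`, and the restriction to low-resolution maps are all hypothetical choices.

```python
import numpy as np

def sinkhorn_plan(p, g, C, reg=0.05, n_iters=200):
    """Entropic-regularized optimal-transport plan between distributions p and g."""
    K = np.exp(-C / reg)
    u = np.ones_like(p)
    for _ in range(n_iters):
        v = g / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def consensus_attribution(perturb_map, grad_map, reg=0.05, lam=5.0):
    """Fuse a perturbation map and a gradient map into a consensus map,
    discounting pixels whose mass must travel far to reconcile the two.
    Intended for low-resolution maps (e.g. a 14x14 CAM grid); the cost
    matrix is quadratic in the number of pixels."""
    h, w = perturb_map.shape
    p = np.abs(perturb_map).ravel()          # treat both maps as distributions
    p = p / p.sum()
    g = np.abs(grad_map).ravel()
    g = g / g.sum()
    ys, xs = np.mgrid[0:h, 0:w]              # pixel coordinates for the ground cost
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    C = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    C /= C.max()                             # scale cost to [0, 1]
    T = sinkhorn_plan(p, g, C, reg)
    # Average distance each pixel's perturbation mass travels to match the gradients:
    # large values flag local disagreement between the two explanations.
    local_cost = (T * C).sum(axis=1) / np.maximum(p, 1e-12)
    consensus = np.sqrt(p * g) * np.exp(-lam * local_cost)
    return (consensus / consensus.max()).reshape(h, w)
```

The design choice illustrated here is that consensus is earned locally: a pixel scores high only if both explanations assign it mass and the transport plan does not have to move that mass far to reconcile them, which is one way gradient-masked saliency can be discounted.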
A PGCA library that runs both offline and online, provides high-fidelity attribution maps that survive adversarial masking, and integrates with the FGMF pipeline.
PhD in Computer Science, Electrical Engineering, or a related field, with an emphasis on XAI
Within a year, release PGCA as an open-source library that achieves ≥10% higher faithfulness than Grad‑CAM++ on ImageNet under AutoAttack, and integrate it into the company's AI platform.
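For context on the faithfulness target, one common way to operationalize faithfulness is a deletion curve in the style of RISE (Petsiuk et al., 2018): remove the pixels an attribution ranks highest and check how fast the model's confidence collapses. The sketch below is only an assumed operationalization; `model_prob` is a hypothetical callable returning the target-class probability, and the single-channel attribution map and zero baseline are assumptions.

```python
import numpy as np

def deletion_auc(model_prob, image, attribution, steps=20, baseline=0.0):
    """Deletion curve: zero out pixels in order of attributed importance and
    track how quickly the target-class probability falls. A faster drop
    (lower area under the curve) indicates a more faithful attribution."""
    h, w = attribution.shape
    order = np.argsort(attribution.ravel())[::-1]   # most important pixels first
    per_step = max(1, order.size // steps)
    x = image.copy()                                # image shaped (..., h, w)
    scores = [model_prob(x)]
    for s in range(steps):
        idx = order[s * per_step:(s + 1) * per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        x[..., ys, xs] = baseline                   # "delete" the top pixels
        scores.append(model_prob(x))
    # Mean retained probability approximates the normalized area under the curve.
    return float(np.mean(scores))
```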
Lead the explainability roadmap, scaling PGCA to multi-modal and multi-agent settings, and mentor a team of XAI engineers.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.