You will craft the next generation of explainable defenses, turning saliency signals into protective masks that are both auditable and performance-friendly.
By fusing Grad-CAM++ approximations with learned attention, you will create the first real-time, interpretable masking layer that can be audited by regulators and visualized by operators.
Saliency-guided adaptive masking for explainable adversarial defense
From: Gradient Masking in Adversarial Training and Explainability
SGAM (saliency-guided adaptive masking) requires a novel attention module that predicts saliency maps and generates interpretable masks; this role will design, train, and validate that module.
A lightweight SGAM layer that can be inserted into CNNs and ViTs, producing visualizable masks and improving robustness without sacrificing accuracy.
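To make the idea concrete, here is a minimal sketch of saliency-guided masking in NumPy. This is a hypothetical illustration, not the SGAM module itself: `saliency_mask`, `keep_ratio`, and `temperature` are names invented for this example, and the saliency map is assumed to be precomputed (e.g., by a Grad-CAM++-style approximation).

```python
import numpy as np

def saliency_mask(feature_map, saliency, keep_ratio=0.5, temperature=10.0):
    """Turn a saliency map into a soft, visualizable mask and apply it.

    Hypothetical sketch: normalize saliency to [0, 1], pick a threshold so
    roughly `keep_ratio` of spatial locations stay active, then squash
    through a sigmoid so the mask is smooth (and thus friendly to
    gradient-based training) rather than a hard binary cutoff.
    """
    # Min-max normalize the saliency map to [0, 1].
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    # Per-map threshold: keep approximately the top `keep_ratio` fraction.
    thresh = np.quantile(s, 1.0 - keep_ratio)
    # Soft mask in [0, 1]; higher temperature approaches a hard mask.
    mask = 1.0 / (1.0 + np.exp(-temperature * (s - thresh)))
    return feature_map * mask, mask

# Usage: mask a 4x4 feature map with a toy saliency map.
fm = np.arange(16, dtype=float).reshape(4, 4)
sal = np.arange(16, dtype=float).reshape(4, 4)
masked, mask = saliency_mask(fm, sal, keep_ratio=0.5)
```

The returned `mask` can be rendered directly as a heatmap, which is the property that makes this kind of layer auditable by regulators and visualizable by operators.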
PhD or Master’s in Computer Science with a focus on computer vision (CV) or explainable AI (XAI)
Deliver an SGAM module that improves robust accuracy by ≥3% on ImageNet under PGD attack while producing audit-ready masks, and integrate it into the company's flagship vision product.
Expand SGAM to multi-agent explainability pipelines and lead the company's explainability strategy across domains.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.