
Staff Applied Scientist – Saliency-Guided Adaptive Masking (SGAM)

Frontier Development
Applied Scientist · Staff · 1 position

Why This Role is Different

Frontier Development Role

You will craft the next generation of explainable defenses, turning saliency signals into protective masks that are both auditable and performance-friendly.

The Frontier Element

By fusing Grad-CAM++ approximations with learned attention, you will create the first real-time, interpretable masking layer that can be audited by regulators and visualized by operators.

🔬

Project Context

Research Area

Saliency-guided adaptive masking for explainable adversarial defense

From: Gradient Masking in Adversarial Training and Explainability

Why This Role is Critical

SGAM requires a novel attention module that predicts saliency maps and generates interpretable masks; this role will design, train, and validate that module.

What You Will Build

A lightweight SGAM layer that can be inserted into CNNs and ViTs, producing visualizable masks and improving robustness without sacrificing accuracy.
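To make the gating idea concrete, here is a minimal, framework-agnostic NumPy sketch: per-channel weights score the feature maps into a saliency mask, which then gates the features. All names are hypothetical; the actual SGAM module is attention-based and trained jointly with the backbone.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def saliency_gate(feats: np.ndarray, w: np.ndarray):
    """Gate feature maps with a predicted saliency mask (illustrative only).

    feats: (C, H, W) feature maps; w: (C,) stand-in for learned attention
    weights. Returns (gated feats, mask) so the mask can be logged for
    audit and visualization alongside the forward pass.
    """
    # Collapse channels into a single (H, W) score map, squash to (0, 1).
    mask = sigmoid(np.tensordot(w, feats, axes=([0], [0])))
    return feats * mask[None, :, :], mask

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w = rng.standard_normal(8)
gated, mask = saliency_gate(feats, w)
```

Because the mask lives in (0, 1) per spatial location, it can be rendered directly as a heatmap, which is what makes the layer auditable by inspection.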

🛠

Key Responsibilities

  • Design the attention-based mask generator architecture and train it jointly with the backbone using saliency loss.
  • Implement efficient Grad-CAM++ approximations suitable for large-scale models.
  • Validate mask interpretability through user studies and audit logs.
  • Integrate SGAM into the adversarial training pipeline and measure impact on robustness and accuracy.
  • Develop visualization tools for mask inspection and regulatory compliance dashboards.
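The Grad-CAM++ approximation referenced above can be sketched in a few lines of NumPy, assuming the last conv block's activations and class-score gradients are already in hand. This uses the standard first-order reduction of the Grad-CAM++ pixel weights (higher-order derivatives replaced by powers of the first gradient); the function name is hypothetical.

```python
import numpy as np

def grad_cam_pp(acts: np.ndarray, grads: np.ndarray) -> np.ndarray:
    """Grad-CAM++ saliency map from activations and gradients.

    acts, grads: (C, H, W) feature maps and dScore/dActs for one input.
    Returns an (H, W) map normalized to [0, 1].
    """
    g2, g3 = grads ** 2, grads ** 3
    # Pixel-wise alpha weights: g^2 / (2 g^2 + sum_ab(A) * g^3).
    denom = 2.0 * g2 + acts.sum(axis=(1, 2), keepdims=True) * g3
    alpha = g2 / np.where(denom != 0.0, denom, 1e-8)
    # Channel importances: alpha-weighted positive gradients.
    w = (alpha * np.maximum(grads, 0.0)).sum(axis=(1, 2))
    cam = np.maximum((w[:, None, None] * acts).sum(axis=0), 0.0)
    return cam / (cam.max() + 1e-8)

rng = np.random.default_rng(1)
acts = np.abs(rng.standard_normal((8, 4, 4)))   # stand-in activations
grads = rng.standard_normal((8, 4, 4))          # stand-in gradients
cam = grad_cam_pp(acts, grads)
```

In practice the "efficient approximation" work is about computing these maps without extra backward passes at scale; the arithmetic above is the baseline being approximated.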
🎯

Required Skills & Experience

Technical Must-Haves

  • Attention mechanisms and graph-based context aggregation (Expert): building SGAM's global attention module
  • Grad-CAM++ and lightweight saliency approximations (Advanced): generating saliency maps for mask guidance
  • Computer vision model architectures (CNN, ViT) (Expert): deploying SGAM across modalities
  • Explainability frameworks (SHAP, Integrated Gradients) (Advanced): benchmarking SGAM's faithfulness
  • Python, PyTorch, TensorBoard, visualization libraries (Expert): implementing and debugging SGAM

Experience Requirements

  • 3+ years of applied research in explainable AI or computer vision
  • Published work on saliency-guided regularization or interpretability

Education

PhD or Master’s in Computer Science with a focus on computer vision or explainable AI

Preferred Skills

  • Experience with regulatory compliance in medical imaging or autonomous driving
  • Knowledge of edge deployment frameworks (ONNX, TensorRT)
🤝

You Will Thrive Here If...

  • You are comfortable building end-to-end systems, from research prototype to production
  • You can communicate complex explainability concepts to non-technical stakeholders
📈

Impact & Growth

12-Month Impact

Deliver an SGAM module that improves robust accuracy by ≥3% on ImageNet under PGD attack while producing audit-ready masks, and integrate it into the company's flagship vision product.

Growth Opportunity

Expand SGAM to multi-agent explainability pipelines and lead the company's explainability strategy across domains.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.