Work at the frontier of second-order optimization, turning theory into a scalable engine that protects models from adversarial attacks while keeping gradients faithful for explainability.
You will pioneer a Hessian-vector product (HVP) engine that runs in real time on large vision transformers, enabling curvature-aware masking without the quadratic cost of materializing full Hessians.
Second-order robust optimization for adversarial training
From: Gradient Masking in Adversarial Training and Explainability
SCOR-PIO 2.0 requires efficient HVP computation and curvature-aware regularization; in this role you will design, implement, and optimize the HVP pipeline.
A production-ready SCOR-PIO 2.0 optimizer module that integrates with PyTorch/JAX, supports distributed training, and exposes APIs for curvature-based masking.
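The core trick behind an efficient HVP pipeline is double backpropagation (Pearlmutter's method): differentiate the gradient-vector inner product instead of materializing the Hessian. The sketch below is illustrative only; the function name and toy quadratic are assumptions, not part of SCOR-PIO 2.0.

```python
import torch

def hvp(loss_fn, params, vector):
    """Hessian-vector product H @ v via double backprop.
    Cost is ~2 gradient evaluations; the full Hessian is never formed."""
    # First backward pass: gradient of the loss, keeping the graph alive.
    grads = torch.autograd.grad(loss_fn(params), params, create_graph=True)[0]
    # Second backward pass: gradient of (grad . v) w.r.t. params gives H v.
    return torch.autograd.grad(grads @ vector, params)[0]

# Toy check on a quadratic loss 0.5 * x^T A x, whose Hessian is exactly A.
A = torch.tensor([[2.0, 0.0], [0.0, 3.0]])
x = torch.zeros(2, requires_grad=True)
v = torch.tensor([1.0, 1.0])
loss = lambda p: 0.5 * p @ A @ p
print(hvp(loss, x, v))  # equals A @ v, i.e. tensor([2., 3.])
```

For a quadratic the result matches `A @ v` exactly, which makes this a convenient unit test before wiring the same operator into curvature-aware masking on real models.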
PhD in Computer Science, Applied Mathematics, or a related field with a focus on optimization
Within 12 months, deliver a SCOR-PIO 2.0 optimizer that boosts robust accuracy by ≥5% on ImageNet under AutoAttack while preserving saliency fidelity, and publish a benchmark paper.
Lead a cross-functional team to extend curvature-aware masking to multi-agent coordination and edge deployment, eventually shaping the company's robust AI platform.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.