
Senior Research Engineer – Second-Order Optimization & Hessian-Vector Products (SCOR-PIO 2.0)

Research Engineer · Senior · 1 position

Why This Role is Different

Frontier Development Role

Work at the frontier of second-order optimization, turning theory into a scalable engine that hardens models against adversarial attacks while keeping gradients faithful for explainability.

The Frontier Element

You will pioneer a Hessian-vector product engine that runs in real-time on large vision transformers, enabling curvature-aware masking without the quadratic cost of full Hessians.
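
To make the "no quadratic cost" point concrete: the Pearlmutter trick computes Hv with two backward passes instead of materializing the full Hessian. A minimal illustrative sketch in PyTorch (a generic single-tensor toy, not the SCOR-PIO engine itself):

```python
import torch

def hvp(loss_fn, params, v):
    """Hessian-vector product via the Pearlmutter trick.

    Hv = d/dr [ grad L(theta + r*v) ] at r = 0, computed with two
    backward passes; the full Hessian is never materialized.
    Handles a single parameter tensor for simplicity.
    """
    loss = loss_fn(params)
    # First backward pass: gradient with the graph retained
    grad = torch.autograd.grad(loss, params, create_graph=True)[0]
    # Second backward pass: differentiate <grad, v> w.r.t. params
    return torch.autograd.grad(grad @ v, params)[0]

# Toy quadratic L(x) = 0.5 * x^T A x, so H = A and Hv = A v
A = torch.tensor([[2.0, 0.0], [0.0, 3.0]])
x = torch.zeros(2, requires_grad=True)
v = torch.tensor([1.0, 1.0])
print(hvp(lambda p: 0.5 * p @ A @ p, x, v))  # tensor([2., 3.])
```

For an n-parameter model this costs roughly two gradient evaluations per product, versus O(n²) memory for the explicit Hessian, which is what makes real-time curvature queries on large vision transformers plausible.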

🔬

Project Context

Research Area

Second-order robust optimization for adversarial training

From: Gradient Masking in Adversarial Training and Explainability

Why This Role is Critical

SCOR-PIO 2.0 requires efficient HVP computation and curvature-aware regularization; this role will design, implement, and optimize the HVP pipeline.

What You Will Build

A production-ready SCOR-PIO 2.0 optimizer module that integrates with PyTorch/JAX, supports distributed training, and exposes APIs for curvature-based masking.

🛠

Key Responsibilities

  • Derive and implement efficient Pearlmutter-based HVP routines for arbitrary network architectures.
  • Integrate curvature regularization into the training loop, ensuring numerical stability and positive-semidefinite guarantees.
  • Optimize memory and compute for distributed GPUs, leveraging mixed precision and gradient checkpointing.
  • Benchmark robustness against AutoAttack, FGSM, PGD, and measure impact on saliency fidelity.
  • Publish an open-source library and technical docs for community adoption.
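
As one hypothetical shape the curvature-regularization responsibility could take (a sketch under assumed design choices, not the SCOR-PIO loss): a gradient-norm penalty is a cheap curvature-aware regularizer, and double backward keeps it differentiable with respect to the model parameters.

```python
import torch

def curvature_regularized_loss(model, loss_fn, x, y, lam=0.1):
    """Hypothetical sketch: augment the task loss with a gradient-norm
    penalty lam * ||grad L||^2, a cheap curvature-aware regularizer.
    create_graph=True keeps the penalty differentiable, so the
    subsequent backward() is a double-backward pass."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads)
    return loss + lam * penalty

# Toy usage with a linear model and MSE loss
model = torch.nn.Linear(2, 1)
x, y = torch.randn(4, 2), torch.randn(4, 1)
total = curvature_regularized_loss(model, torch.nn.functional.mse_loss, x, y)
total.backward()  # parameter grads now include the penalty term
```

Numerical-stability concerns from the responsibilities list show up directly here: the penalty weight `lam` and any damping applied to curvature terms must be tuned so the effective curvature stays positive semidefinite in practice.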
🎯

Required Skills & Experience

Technical Must-Haves

  • Pearlmutter trick and Hessian-vector product implementation (Expert: implementing exact HVPs in deep nets)
  • Second-order optimization algorithms such as CG, Lanczos, and Newton methods (Advanced: designing curvature-aware loss regularizers)
  • PyTorch/JAX automatic differentiation and custom autograd (Expert: extending autograd for HVPs)
  • Distributed training with Horovod, NCCL, or Ray (Advanced: scaling HVPs across multi-node clusters)
  • Numerical linear algebra, including PSD matrices and eigenvalue bounds (Advanced: ensuring stability of curvature terms)
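
The CG skill fits the HVP work because conjugate gradients only ever touches the Hessian through matrix-vector products. A minimal matrix-free CG sketch (toy example with an explicit PSD matrix standing in for a Pearlmutter HVP closure):

```python
import torch

def conjugate_gradient(hvp_fn, b, iters=10, tol=1e-8):
    """Matrix-free CG: solves H x = b for PSD H using only
    Hessian-vector products, as in Hessian-free / Newton-CG methods."""
    x = torch.zeros_like(b)
    r = b.clone()          # residual b - Hx (x starts at 0)
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp_fn(p)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy PSD "Hessian"; in practice hvp_fn would be an HVP closure
H = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
g = torch.tensor([1.0, 2.0])
step = conjugate_gradient(lambda v: H @ v, g)  # approximates H^{-1} g
```

Lanczos iterations use the same HVP-only access pattern to estimate extremal eigenvalues, which is how the eigenvalue bounds from the linear-algebra requirement get monitored in practice.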

Experience Requirements

  • 5+ years in deep learning research focusing on optimization or adversarial robustness
  • Track record of publishing in top ML conferences (ICLR, NeurIPS, ICML)

Education

PhD in Computer Science, Applied Mathematics, or related field with focus on optimization

Preferred Skills

  • Experience with Hessian-free methods in large-scale vision models
  • Knowledge of robust training pipelines and AutoAttack evaluation
🤝

You Will Thrive Here If...

  • You thrive in high-ambiguity research environments and can turn theoretical insights into production code
  • You are self-motivated to ship prototypes and iterate rapidly
📈

Impact & Growth

12-Month Impact

Within 12 months, deliver a SCOR-PIO 2.0 optimizer that boosts robust accuracy by ≥5% on ImageNet under AutoAttack while preserving saliency fidelity, and publish a benchmark paper.

Growth Opportunity

Lead a cross-functional team to extend curvature-aware masking to multi-agent coordination and edge deployment, eventually shaping the company's robust AI platform.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.