Lead Multi‑Modal Adversarial Recourse Engineer

corpora-jobs-1778796293285-db9d41c6 - Frontier Development
Applied Scientist · Senior · 1 position

Why This Role is Different

Frontier Development Role

You’ll build the cross‑modal recourse engine that turns complex, adversarially perturbed inputs into clear, actionable explanations. Your work will fuse vision‑language models, graph reasoning, and medical‑domain standards to deliver recourse that is both robust and clinically usable.

The Frontier Element

Creating a unified recourse framework that simultaneously handles images, text, and graph data while withstanding prompt‑injection and cross‑modal consistency attacks is an unprecedented challenge at the frontier of explainable AI.

🔬

Project Context

Research Area

Multi‑Modal Adversarial Recourse Module (MARM)

From: Counterfactual Explanation Robustness to Adversarial Noise

Why This Role is Critical

MARM is essential for generating actionable counterfactuals that survive adversarial attacks across vision, language, and graph modalities; without it the system cannot provide trustworthy recourse in multi‑agent settings.

What You Will Build

A cross‑modal recourse engine that integrates VLMs, graph embeddings, and adversarial training to produce robust, interpretable counterfactuals, complete with HL7/FHIR‑compatible reports.

🛠

Key Responsibilities

  • Design and train vision‑language‑graph models that support adversarially robust embeddings and cross‑modal consistency losses.
  • Implement adversarial training pipelines (AdvPT, APT) tailored to multi‑modal data and integrate them with the recourse generation loop.
  • Develop HL7/FHIR‑compatible reporting modules that embed heatmaps, textual rationales, and actionable suggestions for clinicians.
  • Benchmark MARM against CARLA, RAG‑Anything, and other multimodal adversarial suites, iterating on robustness metrics.
  • Collaborate with the Diffusion Model Architect to use diffusion‑projected counterfactuals as inputs for recourse optimization.
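As a rough illustration of the first responsibility, a cross-modal consistency loss can be sketched as a penalty on disagreement between the image, text, and graph embeddings of the same case. The minimal NumPy sketch below (function names and the pairwise-cosine formulation are illustrative assumptions, not the team's actual implementation) computes one minus the mean pairwise cosine similarity across the three modality embeddings:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize embeddings to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_modal_consistency_loss(image_emb, text_emb, graph_emb):
    """Penalize disagreement between modalities: 1 - mean pairwise cosine similarity.

    Each argument is a (batch, dim) array of embeddings for the same cases.
    Loss is 0 when all three modalities agree perfectly, and grows as they diverge.
    """
    embs = [l2_normalize(e) for e in (image_emb, text_emb, graph_emb)]
    sims = []
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            # Row-wise cosine similarity between modality i and modality j.
            sims.append(np.sum(embs[i] * embs[j], axis=-1))
    return float(1.0 - np.mean(sims))
```

In an adversarially trained pipeline, a term like this would typically be added to the task loss so perturbations that break agreement between modalities are penalized during training.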
🎯

Required Skills & Experience

Technical Must-Haves

Vision‑Language Models (e.g., CLIP, ViLT, BLIP)

Expert
Fine‑tuning and deploying VLMs for multimodal inference.

Graph Neural Networks and knowledge‑graph embeddings

Advanced
Representing and reasoning over relational data in recourse generation.

Adversarial training and robust optimization

Advanced
Building defenses against prompt‑injection and cross‑modal consistency attacks.

HL7/FHIR standards and clinical NLP

Proficient
Generating interoperable reports for electronic health records.

Experience Requirements

  • 5+ years in multimodal AI or applied research with a focus on explainability or robustness.
  • Published work on VLMs, graph reasoning, or medical AI.
  • Experience deploying ML models in regulated environments.

Education

PhD in Computer Science, Biomedical Engineering, or a related field with expertise in multimodal machine learning.

Preferred Skills

  • Experience with medical imaging datasets and regulatory compliance (HIPAA, GDPR).
  • Knowledge of prompt‑engineering and natural language attack surfaces.
🤝

You Will Thrive Here If...

  • You have a bias toward building end‑to‑end systems that can be shipped and iterated on.
  • You are comfortable with cross‑disciplinary collaboration (clinical, data science, engineering).
📈

Impact & Growth

12-Month Impact

Within a year, deliver a production‑ready MARM pipeline that achieves >80% recourse success under adversarial attacks, produces HL7‑compatible reports, and is integrated into a live clinical decision support system.

Growth Opportunity

Scale the multi‑modal recourse team into a cross‑disciplinary AI‑for‑health division, driving new products that combine interpretability, robustness, and regulatory compliance.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.