
Principal Applied Scientist – Adaptive Uncertainty‑Driven Budget & LLM Counterfactual Reward Shaping Engineer

corpora-jobs-1778796293285-db9d41c6 - Frontier Development
Applied Scientist · Principal · 1 position

Why This Role is Different

Frontier Development Role

Lead the creation of a unified framework that lets MARL agents decide how much explanation to produce, generate counterfactual scenarios on the fly, and embed audit‑ready logs—all while keeping inference cost minimal. Your work will be the safety net that keeps agents trustworthy under adversarial conditions.

The Frontier Element

You will integrate lightweight uncertainty estimation (MC‑Dropout, ensembles) with LLM inference in a single, latency‑bounded pipeline, a capability that has never been demonstrated at scale in multi‑agent RL.
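As a rough sketch of the uncertainty half of that pipeline, here is MC‑Dropout in plain NumPy: dropout stays active at inference, and the spread across stochastic forward passes serves as a per‑decision uncertainty proxy. The toy two‑layer network and its weights are illustrative stand‑ins, not part of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network; random weights stand in for a trained policy head.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def mc_dropout_forward(x, p=0.5, passes=100):
    """Run `passes` stochastic forward passes with dropout kept ON at
    inference; return the mean prediction and a per-output spread that
    serves as an epistemic-uncertainty proxy."""
    outs = []
    for _ in range(passes):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) >= p        # Bernoulli dropout mask
        h = h * mask / (1.0 - p)               # inverted-dropout scaling
        outs.append(h @ W2)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

mean, std = mc_dropout_forward(np.ones(4))
```

The same interface generalizes to deep ensembles: replace the dropout masks with K independently trained weight sets and aggregate the same way.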

🔬

Project Context

Research Area

Uncertainty‑Driven Explanation Budgeting, LLM‑Generated Counterfactual Reward Shaping, and Continuous Auditing

From: Explainability Budget Optimization for Sample Efficiency

Why This Role is Critical

This role orchestrates the dynamic allocation of explanation resources, LLM‑guided counterfactual generation, and real‑time compliance logging—critical for safety, regulatory alignment, and sample efficiency.

What You Will Build

An end‑to‑end system that estimates per‑decision uncertainty, decides token budget allocation, generates counterfactual explanations via LLMs, shapes rewards, and logs immutable audit trails.
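A minimal sketch of how those stages could be wired together in one decision step (every name here, `policy`, `llm`, `log`, and the 0.3 threshold, is a placeholder assumption rather than a prescribed API):

```python
def decide_and_explain(obs, policy, llm, log, threshold=0.3):
    """One pipeline step with stand-in components: `policy` returns
    (action, uncertainty), `llm` maps a prompt plus token budget to
    text, and `log` is any append-only sink."""
    action, uncertainty = policy(obs)
    budget = 256 if uncertainty > threshold else 0       # budget gate
    explanation = llm(f"Explain action {action}", budget) if budget else None
    log.append({"obs": obs, "action": action,
                "uncertainty": uncertainty, "explanation": explanation})
    return action, explanation
```

Confident decisions skip the LLM call entirely, which is how the pipeline stays latency‑bounded; only uncertain decisions pay for a generated explanation.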

🛠

Key Responsibilities

  • Design a lightweight, calibrated uncertainty estimator that feeds into a token‑budget policy.
  • Build an LLM interface that generates counterfactual scenarios and paraphrases policy logic in real time.
  • Develop reward‑shaping modules that incorporate LLM counterfactuals to accelerate credit assignment.
  • Implement immutable audit logging (e.g., blockchain anchoring) for every decision trace and explanation.
  • Create a continuous feedback loop that maps expert annotations to policy updates via few‑shot learning.
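For the first responsibility above, one simple candidate (an assumption for illustration, not a prescribed design) is a piecewise‑linear token‑budget policy over a calibrated uncertainty score:

```python
def token_budget(uncertainty, u_lo=0.05, u_hi=0.5, min_tokens=0, max_tokens=256):
    """Map a calibrated per-decision uncertainty score in [0, 1] to an
    explanation token budget: confident decisions get little or no
    explanation, uncertain ones get the full budget."""
    if uncertainty <= u_lo:
        return min_tokens                        # confident: skip explanation
    if uncertainty >= u_hi:
        return max_tokens                        # uncertain: full budget
    frac = (uncertainty - u_lo) / (u_hi - u_lo)  # interpolate in between
    return min_tokens + round(frac * (max_tokens - min_tokens))
```

The thresholds `u_lo` and `u_hi` are exactly where calibration matters: an over‑confident estimator would silently starve the explanation budget.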
🎯

Required Skills & Experience

Technical Must-Haves

Uncertainty Estimation in Deep Learning

Expert: MC‑Dropout, ensembles, Bayesian neural nets.

Large Language Model Fine‑Tuning & Prompt Engineering

Advanced: generating counterfactuals and explanations.

Reinforcement Learning Reward Shaping

Advanced: integrating LLM outputs into the reward signal.
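One standard way to fold such outputs into the reward signal is potential‑based shaping, which preserves the optimal policy; treating an LLM's counterfactual score as the potential function is an assumption made here purely for illustration:

```python
def shaped_reward(r, phi_s, phi_s_next, gamma=0.99):
    """Potential-based reward shaping: add gamma*phi(s') - phi(s) to the
    environment reward r. In this sketch, phi would be a scalar score the
    LLM assigns to a state by comparing it against a counterfactual
    alternative; the potential-based form keeps the optimal policy intact."""
    return r + gamma * phi_s_next - phi_s
```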

Immutable Logging & Blockchain Anchoring

Proficient: ensuring audit‑ready traceability.
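A minimal hash‑chained, append‑only log illustrates the traceability idea; the anchoring step itself (e.g., committing the head hash to an external ledger) is deliberately out of scope for this sketch:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained decision log. Each entry commits to the
    previous entry's hash, so modifying any earlier record breaks
    verification of everything after it."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, record):
        payload = json.dumps({"prev": self._prev, "record": record},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"prev": self._prev, "record": record,
                             "hash": digest})
        self._prev = digest
        return digest

    def verify(self):
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"prev": prev, "record": e["record"]},
                                 sort_keys=True)
            if e["prev"] != prev or \
               hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Periodically anchoring the latest hash externally is what upgrades tamper‑evidence within the process to tamper‑evidence against the operator.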

Experience Requirements

  • 7+ years in applied AI with a focus on RL, uncertainty, and LLMs.
  • Demonstrated deployment of LLM‑based counterfactual generation in production.

Education

PhD in Machine Learning, Computer Science, or related field.

Preferred Skills

  • Experience with regulatory compliance (GDPR, AI Act) in AI systems.
  • Knowledge of adversarial robustness techniques for RL.
🤝

You Will Thrive Here If...

  • Comfortable owning a system from research to production with minimal hand‑offs.
  • Able to iterate quickly on complex pipelines while maintaining rigorous audit standards.
📈

Impact & Growth

12-Month Impact

Within 12 months, deliver a fully autonomous budget‑aware explanation engine that reduces human‑review workload by 70% and cuts sample complexity by 40% on a live MARL deployment.

Growth Opportunity

Expand the framework to support multi‑modal agents, scale to thousands of concurrent users, and lead the company’s compliance‑first AI strategy.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.