
Principal Graph-Conditioned Explanation Architect

Frontier Development
Research Scientist · Principal · 1 position

Why This Role is Different

Frontier Development Role

You will architect the heart of our Joint Interpretability‑Trust framework, marrying cutting‑edge graph neural networks with transformer‑augmented LLMs to produce explainable, context‑aware diagnostics for multi‑agent systems. Your work will directly reduce cascading misinterpretation and unlock the trust‑propagation layers that follow.

The Frontier Element

This role pushes the boundary of explainable AI by integrating multimodal graph transformers and diffusion‑based explanation generation into an asynchronous, distributed agent network—a combination that, to our knowledge, has not yet been deployed at scale in production.

🔬

Project Context

Research Area

Contextual Graph-Conditioned Explanation (CGCE)

From: Cascading Misinterpretation and Suboptimal Joint Actions

Why This Role is Critical

CGCE is the linchpin that turns raw inter-agent messages into a structured, graph‑conditioned explanation space, enabling downstream trust and policy modules to detect semantic inconsistencies before they cascade.

What You Will Build

A production‑ready, modular explanation engine that ingests local observations and neighbor messages, constructs contextual graphs, applies transformer or GNN backbones (including dual‑UNet diffusion), and outputs real‑time, multimodal explanations that feed into DTSP and JPRO‑SOB.

🛠

Key Responsibilities

  • Design and implement the contextual graph representation pipeline for local observations and inter‑agent messages.
  • Prototype and benchmark transformer‑based and GNN‑based explanation backbones, including dual‑UNet diffusion, under real‑time constraints.
  • Integrate the explanation output with DTSP trust scores and JPRO‑SOB policy re‑optimization modules.
  • Optimize inference for edge and cloud deployments using CUDA, C++ extensions, and model pruning.
  • Validate consistency‑detection accuracy against simulated cascading misinterpretation scenarios and publish results.
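To make the pipeline shape concrete, here is a minimal, hypothetical sketch of the CGCE flow described above: fuse a local observation and neighbor messages into a contextual graph, then emit an explanation that flags semantic inconsistencies before they cascade. All names are illustrative, and a toy string comparison stands in for the real transformer/GNN/diffusion backbones and the DTSP/JPRO‑SOB integration.

```python
from dataclasses import dataclass, field

@dataclass
class ContextualGraph:
    """Nodes are agent IDs; edges link each messaging neighbor to the local agent."""
    nodes: dict = field(default_factory=dict)   # agent_id -> feature payload
    edges: list = field(default_factory=list)   # (src, dst) message links

def build_contextual_graph(agent_id, observation, neighbor_messages):
    """Fuse a local observation with inter-agent messages into one graph."""
    g = ContextualGraph()
    g.nodes[agent_id] = {"obs": observation}
    for src, msg in neighbor_messages.items():
        g.nodes[src] = {"msg": msg}
        g.edges.append((src, agent_id))
    return g

def explain(graph, agent_id):
    """Stand-in for the explanation backbone: flags neighbors whose
    reported state disagrees with the local observation."""
    local = graph.nodes[agent_id]["obs"]
    flagged = [src for src, dst in graph.edges
               if dst == agent_id and graph.nodes[src]["msg"] != local]
    return {"agent": agent_id,
            "consistent": not flagged,
            "flagged_neighbors": flagged}
```

For example, `explain(build_contextual_graph("a0", "obstacle", {"a1": "obstacle", "a2": "clear"}), "a0")` flags `a2` as inconsistent; in the production engine that signal would feed the downstream trust and policy modules rather than being returned directly.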
🎯

Required Skills & Experience

Technical Must-Haves

Graph Neural Networks

Expert
Core to building the contextual graph backbone.

Transformer‑based Language Models

Expert
For encoding textual messages and generating explanations.

Diffusion Models (Dual‑UNet)

Advanced
To refine explanation quality across modalities.

Multimodal Graph Transformers

Advanced
To fuse text, image, and sensor data into a single graph.

PyTorch / TensorFlow

Expert
Primary frameworks for model development.

CUDA / C++ inference optimization

Advanced
For low‑latency edge deployment.

Explainability methods (Grad‑CAM, SHAP)

Proficient
To validate and enhance explanation fidelity.

Experience Requirements

  • 8+ years in ML research with a focus on graph neural networks and LLMs.
  • Published work in top-tier conferences (NeurIPS, ICML, ICLR) on explainability or graph‑augmented language models.
  • Hands‑on experience building production‑grade explainability systems for AI.

Education

PhD in Computer Science or Machine Learning, with a specialization in Graph Neural Networks or NLP.

Preferred Skills

  • Experience with multimodal transformers and vision‑language integration.
  • Edge‑device inference optimization and real‑time deployment.
  • Knowledge of reinforcement learning pipelines for downstream policy modules.
🤝

You Will Thrive Here If...

  • You thrive in high‑autonomy environments and love solving undefined problems.
  • You build end‑to‑end systems from research to production.
  • You show a relentless drive to ship working prototypes.
📈

Impact & Growth

12-Month Impact

Within 12 months, deliver a production‑ready explanation service that cuts cascading misinterpretation by 30% in our simulated multi‑agent benchmarks, fully integrated into the JIT pipeline and available for downstream trust and policy modules.

Growth Opportunity

Evolve into the lead AI architect for multi‑agent interpretability across our product portfolio, shaping the next generation of trustworthy autonomous systems.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.