Join the Frontier

corpora-jobs-1778796293285-db9d41c6 - Open Positions
Generated: 2026-05-14 23:06 | 45 positions across 45 roles

🌟 Our Mission

To pioneer frontier AI systems that safeguard and amplify human decision‑making, delivering real‑world impact through secure, privacy‑preserving, and explainable deep‑tech solutions.

We envision a world where autonomous agents operate safely and transparently across distributed networks, trusting each other and the data they share. By marrying quantum resilience, blockchain trust, and advanced generative models, we aim to set new standards for AI safety, reliability, and accountability at scale.

The corpora-jobs-1778796293285-db9d41c6 initiative is a bold, multi‑disciplinary effort to build the first end‑to‑end, privacy‑preserving, adversarially robust federated learning platform for autonomous swarms. It integrates quantum‑enhanced aggregation, zero‑knowledge trust ledgers, and LLM‑driven adversarial curricula to keep distributed agents safe from manipulation while preserving data confidentiality. Achieving this requires talent that can blend cutting‑edge research, systems engineering, and cryptographic design, turning theoretical breakthroughs into production‑ready technology.

📊 What We Are Hiring

  • 45 Distinct Roles
  • 45 Total Positions
  • 8 Role Categories
  • 15 Research Areas

Positions by Role Type

Role Type                 Positions
Applied Scientist         12
Research Engineer         8
Research Scientist        8
Systems Engineer          7
ML/AI Engineer            4
Algorithm Developer       3
Platform Engineer         2
Infrastructure Engineer   1

Positions by Seniority

Level      Positions
Principal  17
Staff      12
Senior     10
Lead       6

🔥 Our Culture

We are a tight‑knit squad of builders who thrive on ambiguity, deep curiosity, and the drive to turn impossible ideas into reality. Our work ethic is relentless: we iterate fast, test hard, and never settle for “good enough.” We read papers, run experiments, and publish results. We’re not afraid to fail—each setback is a lesson that propels us forward. If you’re a self‑starter who loves to own a problem from scratch to production, you’ll fit right in.

Frontier Mindset

We treat every problem as an open frontier, pushing beyond what is currently possible in AI, systems, or security. Curiosity fuels experimentation, and failure is a stepping stone to discovery.
Day to day, you’ll prototype novel algorithms, run bold experiments, and iterate rapidly—always asking, “What if we could do this in a way no one has yet tried?”

Get‑It‑Done Mentality

Ideas move from paper to product quickly. We value execution as much as insight, turning research into robust, deployable systems that operate in the real world.
You’ll own a feature from concept to deployment, debug it in production, and deliver measurable impact within weeks, not months.

Deep Ownership

Every team member owns the entire stack—design, code, data, and outcomes. Accountability and pride drive quality and speed.
You’ll take responsibility for the reliability of your component, from unit tests to monitoring dashboards, and champion its success or failure.

Intellectual Honesty

We confront hard truths, admit uncertainty, and share knowledge openly. This honesty accelerates progress and builds trust inside and outside the team.
You’ll critique your own work, peer‑review rigorously, and publish findings transparently—no hidden assumptions, just clear, reproducible science.

🚀 Why Join Us

  • You’ll work on the first quantum‑resilient federated learning system that runs on real UAV swarms, a technology that will shape future autonomous defense and logistics.
  • You’ll pioneer recursive zero‑knowledge proofs in a live federated ledger, setting industry‑first standards for privacy‑preserving AI.
  • You’ll build LLM‑driven adversarial curricula that generate instruction‑level attacks, pushing the boundaries of AI safety and robustness.
  • You’ll collaborate with world‑class researchers, engineers, and cryptographers—all focused on solving problems that no other company is tackling.

Benefits & Perks

  • Equity and a competitive salary that reflects frontier‑tech market rates.
  • Unlimited remote work with flexible hours—your lab can be anywhere.
  • Access to cutting‑edge hardware, including GPUs, edge devices, and experimental quantum nodes.
  • Continuous learning stipend, conference travel, and a culture that rewards publishing and open‑source contributions.

🔎 How We Hire

Beyond a stellar CV, we look for a proven ability to solve hard, open problems, a track record of turning theory into production, and a mindset that embraces failure as feedback. We value curiosity, deep ownership, and the willingness to question assumptions. Candidates who can articulate their thinking, collaborate cross‑functionally, and demonstrate impact in research or product will stand out.

💼 All Open Positions

Chapter 1: Adversarial Observation Perturbations and Policy Inference

Senior Generative Observation Modeling Engineer

Research Engineer · Senior · 1 position
Lead the development of cutting‑edge conditional generative models that reconstruct corrupted multimodal sensor data in real time, pushing the limits of GAN stability and privacy on distributed agents.
You’ll design a hybrid GAN that integrates physics‑based loss terms and differential privacy into a lightweight architecture that can run on UAV swarms, a first in the field of adversarial observation inference.
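For a flavor of the training objective this role centers on, here is a minimal sketch of a hybrid generator loss that adds a physics‑consistency penalty to the usual adversarial term. The tiny `gen`/`disc` networks and the finite‑difference `physics_residual` are illustrative stand‑ins, not our architecture; in practice, differential privacy would enter via DP‑SGD (per‑sample gradient clipping plus calibrated noise), which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the real conditional multimodal architectures.
gen = nn.Sequential(nn.Linear(16, 16), nn.Tanh(), nn.Linear(16, 16))
disc = nn.Sequential(nn.Linear(16, 1))

def physics_residual(x):
    # Hypothetical constraint: penalize non-smooth consecutive readings.
    return x[:, 1:] - x[:, :-1]

def generator_loss(corrupted, lambda_phys=0.1):
    recon = gen(corrupted)                        # reconstructed sensor frame
    adv = F.binary_cross_entropy_with_logits(     # adversarial "fool the critic" term
        disc(recon), torch.ones(corrupted.size(0), 1))
    phys = physics_residual(recon).pow(2).mean()  # physics-consistency penalty
    # Differential privacy would enter during optimization (DP-SGD:
    # per-sample gradient clipping + calibrated Gaussian noise), omitted here.
    return adv + lambda_phys * phys

generator_loss(torch.randn(8, 16)).backward()
```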

Principal Bayesian Policy Inference Architect

Research Scientist · Principal · 1 position
Architect the probabilistic inference backbone that lets multi‑agent systems reason about policy uncertainty in the presence of adversarial observation noise, enabling autonomous decision‑making under threat.
Your work will pioneer a joint GAN‑Bayesian inference pipeline that can be queried in real time on distributed agents, a novel contribution to robust MARL that blends generative modeling with hierarchical Bayesian inference.

Design and implement a self‑learning LLM system that crafts semantic adversarial prompts to stress‑test multi‑agent policies, driving robust learning and uncovering hidden failure modes.
You’ll create an LLM‑based red‑team that can autonomously generate instruction‑level attacks, a frontier concept in AI safety and policy robustness that pushes beyond gradient‑based adversaries.

Chapter 2: Trust‑Aware Federated Aggregation in Multi‑Agent Settings

Lead the design and deployment of a quantum‑resilient aggregation engine that turns cutting‑edge quantum algorithms into a production‑grade component for federated learning. This role blends deep quantum theory with systems engineering to deliver the first end‑to‑end quantum‑secure FL pipeline.
You will pioneer the use of Grover‑style amplitude amplification and entanglement checks in a real‑world federated learning system, pushing the boundary of what is possible with today’s noisy intermediate‑scale quantum (NISQ) devices.

Architect and build the trust ledger that turns abstract reputation metrics into immutable, verifiable records. This role blends deep cryptography with distributed systems to deliver a production‑grade blockchain that satisfies regulators and incentivizes honest participation.
You will pioneer recursive zero‑knowledge proof integration in a real‑time federated learning ledger, enabling end‑to‑end privacy guarantees without exposing sensitive data, a first in the industry.

Own the intelligence that turns raw client updates into trustworthy, privacy‑preserving contributions. This role blends advanced privacy theory, Bayesian inference, and federated learning engineering to deliver a robust, adaptive trust system.
You will create the first end‑to‑end reputation‑driven DP scheduler that dynamically balances privacy and utility in non‑IID, adversarial federated settings, a breakthrough that has no direct precedent in the literature.
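As a rough illustration of the reputation‑driven scheduling idea, the sketch below maps a client's reputation score to a Gaussian noise multiplier with a hard floor, so higher‑trust clients contribute higher‑utility updates while every client retains a baseline privacy guarantee. The function names and the linear schedule are hypothetical; designing the real adaptive scheduler is the heart of this role.

```python
import numpy as np

def dp_noise_scale(reputation, sigma_max=2.0, sigma_min=0.5):
    """Map a reputation score in [0, 1] to a Gaussian noise multiplier.
    Higher reputation -> less noise (more utility); the sigma_min floor
    keeps a baseline privacy guarantee for every client."""
    return sigma_max - reputation * (sigma_max - sigma_min)

def privatize_update(update, reputation, clip_norm=1.0, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    sigma = dp_noise_scale(reputation)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

noisy = privatize_update(np.random.randn(10), reputation=0.8)
```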

Chapter 3: Theory of Mind Defenses Against Communication Sabotage

Lead the frontier of adversarial curriculum design by marrying large‑language‑model semantics with multi‑agent reinforcement learning. Your work will set the standard for provably robust ToM policies that survive evolving sabotage tactics in real‑time, distributed environments.
You will pioneer a bi‑level Stackelberg game where an LLM oracle continuously mutates deceptive messages, creating an ever‑shifting threat space that forces agents to learn anticipatory reasoning. This is the first end‑to‑end, provably robust ToM curriculum that operates at scale.

Own the end‑to‑end design of a graph‑based belief regularizer that keeps multi‑agent reasoning robust to deceptive inputs while staying computationally efficient enough for real‑time deployment.
You will pioneer a dynamic, non‑monotonic belief graph that simultaneously tracks credibility, confidence, and structural support, and enforce it through a lightweight regularizer integrated into a generalized multi‑relational GCN.

Lead the creation of a real‑time, self‑verifying inference stack that detects distribution shift and adversarial messages with sub‑5 ms latency, while producing transparent audit logs for human operators.
You will combine amortized latent steering, self‑supervised adaptation, and cross‑modal manifold alignment into a single, inference‑time module that operates without back‑propagation, pushing the boundary of real‑time AI safety.

Chapter 4: Explainability Budget Optimization for Sample Efficiency

Lead the design and implementation of a token‑budgeted reasoning engine that lets MARL agents ask for counterfactual explanations on‑the‑fly, cutting inference cost while keeping explanations audit‑ready. Your work will be the linchpin that turns theoretical CoT ideas into a production‑grade, low‑latency system.
You will pioneer a hybrid RL–transformer architecture that learns to allocate a hard token budget in real time, a capability that has never been demonstrated at scale in multi‑agent settings.

Architect a cutting‑edge neuro‑symbolic system that lets agents reason over domain ontologies while learning from sparse interactions. Your work will make it possible to generate human‑readable, audit‑ready rationales on demand, a first for adversarial MARL.
You will pioneer a dynamic hypernetwork that generates task‑specific symbolic constraints on the fly, allowing the policy to adapt to evolving knowledge graphs without retraining the entire network.

Lead the creation of a unified framework that lets MARL agents decide how much explanation to produce, generate counterfactual scenarios on the fly, and embed audit‑ready logs—all while keeping inference cost minimal. Your work will be the safety net that keeps agents trustworthy under adversarial conditions.
You will integrate lightweight uncertainty estimation (MC‑Dropout, ensembles) with LLM inference in a single, latency‑bounded pipeline, a capability that has never been demonstrated at scale in multi‑agent RL.
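For context, MC‑Dropout is the cheapest of the uncertainty estimators named above: keep dropout active at inference and treat the spread across stochastic forward passes as an epistemic‑uncertainty proxy. A minimal PyTorch sketch (toy model, illustrative only):

```python
import torch
import torch.nn as nn

# Toy model; the point is only that it contains dropout.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 1))

def mc_dropout_predict(model, x, passes=20):
    """Run several stochastic forward passes with dropout left on; the mean
    is the prediction and the variance a cheap epistemic-uncertainty proxy."""
    model.train()  # keeps dropout active; no optimizer step happens here
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(passes)])
    return samples.mean(dim=0), samples.var(dim=0)

mean, var = mc_dropout_predict(model, torch.randn(4, 8))
```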

Chapter 5: Partial Observability Amplification of Misalignment

You will pioneer a new class of belief‑aware abstractions that fuse information‑theoretic regularization with hierarchical policy decomposition. Your work will directly tackle the hardest credit‑assignment bottleneck in decentralized RL, enabling agents to reason about uncertainty at multiple temporal scales.
By embedding a variational bottleneck in belief space, you will create the first end‑to‑end differentiable pipeline that learns to discard spurious observations while preserving essential coordination cues—an approach that has never been demonstrated at scale in MARL.

You will build the core predictive engine that lets agents forecast their own future beliefs and observations, turning the BAAC framework from a conceptual design into a deployable system that runs at real‑time rates on edge devices.
By fusing belief and observation prediction in a single autoregressive loop, you will create the first model that can ‘imagine the next view’ while simultaneously predicting the next action—an ability that has never been realized at scale in multi‑agent settings.

You will create the first adversarial system that watches agents’ belief evolution in real time, detecting subtle misalignments before they cascade into catastrophic failures—an essential safety layer for any large‑scale, partially observable MARL deployment.
By treating belief trajectories as a sequence and training a discriminator to distinguish expert from agent trajectories, you will bridge adversarial learning, imitation learning, and multi‑agent RL in a way that has never been done at this scale.

Chapter 6: Gradient Masking in Adversarial Training and Explainability

Lead the frontier of second-order optimization, turning theory into a scalable engine that protects models from adversarial attacks while keeping gradients faithful for explainability.
You will pioneer a Hessian-vector product engine that runs in real-time on large vision transformers, enabling curvature-aware masking without the quadratic cost of full Hessians.
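The core trick behind such an engine is standard double backpropagation: a Hessian‑vector product costs roughly two backward passes and never materializes the full Hessian. A minimal PyTorch sketch with a sanity check:

```python
import torch

def hvp(loss_fn, params, v):
    """Hessian-vector product via double backprop: differentiate the dot
    product of the gradient with v. Roughly two backward passes, and the
    full (quadratic-size) Hessian is never materialized."""
    loss = loss_fn(params)
    (grad,) = torch.autograd.grad(loss, params, create_graph=True)
    (hv,) = torch.autograd.grad((grad * v).sum(), params)
    return hv

w = torch.randn(5, requires_grad=True)
v = torch.randn(5)
# loss = sum(w^4) has Hessian diag(12 w^2), so hvp should equal 12 * w**2 * v.
print(torch.allclose(hvp(lambda p: (p ** 4).sum(), w, v), 12 * w.detach() ** 2 * v))
```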

You will craft the next generation of explainable defenses, turning saliency signals into protective masks that are both auditable and performance-friendly.
By fusing Grad-CAM++ approximations with learned attention, you will create the first real-time, interpretable masking layer that can be audited by regulators and visualized by operators.

You will architect the most reliable explainability engine, blending perturbation and gradient signals into a consensus that withstands adversarial manipulation.
By aligning perturbation maps with gradient maps via Wasserstein-style alignment, you will create the first attribution method that is provably robust to gradient masking and adversarial perturbations.
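To make the consensus idea concrete, here is a rough sketch that fuses a gradient map with a perturbation map and flags disagreement via a 1‑D Wasserstein distance between their normalized attribution masses. Real attribution maps are 2‑D; flattening to index positions and the `tol` tolerance are simplifications invented for this example.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def consensus_attribution(grad_map, pert_map, tol=0.05):
    """Fuse gradient- and perturbation-based maps, but only trust the result
    when their normalized attribution masses agree (small 1-D Wasserstein
    distance over feature positions); disagreement suggests masking."""
    g = np.abs(grad_map) / np.abs(grad_map).sum()
    p = np.abs(pert_map) / np.abs(pert_map).sum()
    idx = np.arange(g.size)
    dist = wasserstein_distance(idx, idx, g, p)  # distance in index units
    return 0.5 * (g + p), dist <= tol * g.size   # fused map, agreement flag

fused, ok = consensus_attribution(np.random.rand(256), np.random.rand(256))
```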

Chapter 7: Counterfactual Explanation Robustness to Adversarial Noise

You’ll pioneer a privacy‑aware causal discovery engine that powers adversarially robust counterfactual explanations. Your work will sit at the intersection of causal inference, differential privacy, and adversarial machine learning, enabling trustworthy explanations in multi‑modal, multi‑agent systems.
Building a causal graph that is both statistically sound and privacy‑preserving in high‑dimensional multimodal settings pushes the boundary of what causal discovery can achieve in production adversarial environments.

You’ll architect the next‑generation diffusion engine that projects adversarial perturbations onto the data manifold while respecting causal constraints. Your work will merge deep generative modeling with causal reasoning to produce realistic, actionable counterfactuals across vision, language, and graph domains.
Designing a diffusion model that simultaneously enforces manifold fidelity, causal consistency, and cross‑modal coherence pushes the limits of generative AI in safety‑critical applications.

Lead Multi‑Modal Adversarial Recourse Engineer

Applied Scientist · Senior · 1 position
You’ll build the cross‑modal recourse engine that turns complex, adversarially perturbed inputs into clear, actionable explanations. Your work will fuse vision‑language models, graph reasoning, and medical‑domain standards to deliver recourse that is both robust and clinically usable.
Creating a unified recourse framework that simultaneously handles images, text, and graph data while withstanding prompt‑injection and cross‑modal consistency attacks is an unprecedented challenge at the frontier of explainable AI.

Chapter 8: Misattribution of Blame in Cooperative Multi‑Agent Systems

Lead the frontier of causal inference in high‑stakes multi‑agent systems. You’ll design algorithms that turn noisy, partially observable logs into a principled causal fabric, enabling trustworthy blame signals that survive adversarial manipulation.
You’ll pioneer hybrid Bayesian‑neural causal discovery that blends PC/NOTEARS with graph‑neural‑network priors, achieving online, cycle‑aware learning in non‑stationary environments—a capability that has no commercial precedent.

Staff Counterfactual Policy Evaluation Engineer

Applied Scientist · Staff · 1 position
Architect the next‑generation counterfactual engine that blends causal knowledge with adaptive importance weighting, delivering trustworthy blame scores even in high‑dimensional, non‑stationary bandit settings.
You’ll engineer a continuous adaptive blending (CAB) scheme that learns surrogate policies from logged data, enabling real‑time generation of counterfactual trajectories while maintaining unbiasedness—a novel contribution to offline RL evaluation.
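CAB builds on the same foundation as the doubly robust estimator sketched below, which blends a model‑based (direct) estimate with an importance‑weighted correction and stays unbiased if either the propensities or the reward model is correct; CAB's contribution is replacing the fixed blend with learned per‑sample weights. The array names are illustrative:

```python
import numpy as np

def doubly_robust_value(rewards, propensities, target_probs, q_hat):
    """Blend a model-based (direct) estimate with an importance-weighted
    correction; unbiased if either the propensities or q_hat is correct.
    CAB replaces the fixed blend with learned per-sample weights."""
    w = target_probs / propensities              # importance weights
    return q_hat.mean() + (w * (rewards - q_hat)).mean()

v = doubly_robust_value(
    rewards=np.array([1.0, 0.0, 1.0]),           # logged outcomes
    propensities=np.array([0.5, 0.3, 0.9]),      # logging policy's action probs
    target_probs=np.array([0.6, 0.1, 0.8]),      # evaluated policy's action probs
    q_hat=np.array([0.7, 0.2, 0.9]),             # reward-model predictions
)
```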

Senior Adversarial Robust Explanation Engineer

Applied Scientist · Senior · 1 position
Build the most resilient explanation engine ever seen in multi‑agent AI, combining state‑of‑the‑art explainers with adversarial training to guarantee that blame signals cannot be gamed by malicious agents or operators.
You’ll develop a novel adversarial‑steered explanation weighting algorithm that jointly optimizes model accuracy and explanation stability, a technique that has never been applied to multi‑agent blame attribution.

Chapter 9: Cascading Misinterpretation and Suboptimal Joint Actions

Principal Graph-Conditioned Explanation Architect

Research Scientist · Principal · 1 position
You will architect the heart of our Joint Interpretability‑Trust framework, marrying cutting‑edge graph neural networks with transformer‑augmented LLMs to produce explainable, context‑aware diagnostics for multi‑agent systems. Your work will directly reduce cascading misinterpretation and unlock the trust‑propagation layers that follow.
This role pushes the boundary of explainable AI by integrating multimodal graph transformers and diffusion‑based explanation generation into an asynchronous, distributed agent network—a combination that has never been deployed at scale in production.

Senior Bayesian Trust Propagation Engineer

Research Engineer · Staff · 1 position
You will build the trust backbone of our multi‑agent system, turning abstract Bayesian updates into a robust, low‑latency trust‑propagation service that survives benign noise and active adversaries alike. Your work will be the invisible guard that keeps the entire JIT framework from collapsing.
This role fuses Bayesian trust modeling, blockchain‑based identity validation, and real‑time inference on heterogeneous devices—an unprecedented combination that has never been deployed at scale in multi‑agent AI.
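At its simplest, the Bayesian core of such a service can be a conjugate Beta‑Bernoulli update per agent, which keeps each trust revision O(1); the sketch below shows that skeleton, with propagation and adversary modeling layered on top in the real system:

```python
class BetaTrust:
    """Per-agent trust as a Beta posterior over 'this report was consistent'.
    Conjugacy makes every update O(1), which is what keeps the propagation
    service low-latency."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # Beta(1, 1) = uniform prior

    def update(self, consistent: bool):
        if consistent:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self):
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for report_ok in (True, True, False, True):
    t.update(report_ok)
print(round(t.mean, 3))  # 0.667 after 3 consistent reports and 1 inconsistent
```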

Lead RL Sub‑Optimality Engineer

Research Engineer · Lead · 1 position
You will pioneer a provably ε‑optimal joint‑policy engine that marries advanced RL theory with real‑time distributed execution. Your work will ensure that our agents never drift beyond a safety‑guaranteed performance envelope, even under noisy or adversarial conditions.
Combining sample‑complexity‑optimal RL with distributed sub‑optimality bounds and trust‑driven re‑optimization is a novel, uncharted territory that pushes the limits of safety‑critical AI.

Chapter 10: Overfitting of Explainability Models to Benign Data

Lead the end‑to‑end design of a robust, uncertainty‑aware explainability system that can be deployed in safety‑critical, multi‑agent environments. Your work will set the standard for how explanations survive adversarial attacks and evolving data streams.
You will pioneer a joint loss formulation that aligns gradient spaces of predictions and explanations, a novel Bayesian counterfactual sampler that guarantees epistemic coverage, and a real‑time explanation‑drift engine that operates at scale—none of which exist in current commercial pipelines.

Architect a neurosymbolic explanation engine that turns black‑box model reasoning into formally verifiable, human‑readable logic—pushing the frontier of trustworthy AI in safety‑critical applications.
You will create a hybrid system that fuses LLM chain‑of‑thought reasoning with MaxSAT constraint solving, a combination that has never been deployed at scale in production. The engine will guarantee that explanations survive adversarial perturbations and can be audited by regulators.

Architect and ship a privacy‑first federated learning platform that lets agents collaborate on explanations without leaking sensitive data—essential for regulated, multi‑agent deployments.
You will build the first end‑to‑end system that combines federated learning, differential privacy, and explainability in a single pipeline, a combination that has not yet been demonstrated at scale in production.

Chapter 11: Retrieval Unreliability and Knowledge Base Corruption

Lead the design of a cryptographically secure, immutable knowledge‑base that turns every embedding into a verifiable artifact. Your work will become the foundation for trust, compliance, and self‑healing in our multi‑agent AI ecosystem.
You will pioneer the first end‑to‑end cryptographic provenance chain for semantic embeddings, blending cutting‑edge blockchain primitives with vector‑store internals—an uncharted intersection of AI and secure systems.

Staff Adaptive Trust‑Weighted Retrieval Architect

Algorithm Developer · Staff · 1 position
Architect the next‑generation retrieval engine that learns to trust the right vectors, dynamically adjusts to query context, and thwarts membership inference and poisoning attacks—all while preserving semantic recall.
You will build a retrieval system that treats trust as a first‑class feature, learning adaptive weighting from live feedback and integrating graph consistency checks—an unprecedented fusion of information retrieval, cryptography, and online learning.
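A minimal version of trust‑as‑a‑first‑class‑feature is re‑ranking by cosine similarity multiplied by a per‑vector trust weight; the sketch below shows that baseline. The `gamma` exponent and the random data are illustrative; the actual system learns the weighting from live feedback.

```python
import numpy as np

def trust_weighted_scores(query, vectors, trust, gamma=1.0):
    """Score = cosine similarity x trust^gamma; gamma controls how hard
    low-trust vectors are demoted relative to their semantic relevance."""
    sims = vectors @ query / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(query) + 1e-12)
    return sims * trust ** gamma

vecs = np.random.randn(100, 64)                  # toy embedding store
trust = np.random.rand(100)                      # per-vector trust in [0, 1]
scores = trust_weighted_scores(np.random.randn(64), vecs, trust)
top5 = np.argsort(scores)[::-1][:5]              # trusted-and-relevant hits
```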

Build the self‑checking heart of our RAG system—an AI critic that can spot hallucinations, request fresh evidence, and keep the agent’s answers grounded in truth.
You will create the first end‑to‑end, low‑latency critic loop that operates at inference time, combining lightweight transformer inference with dynamic retrieval re‑ranking—an uncharted approach to self‑correcting generation.

Chapter 12: Hallucination Amplification in Multi‑Agent Debate

You will architect and ship the evidence engine that keeps the debate honest. From vector‑search backends to policy‑driven query engines, you’ll turn raw knowledge into the fuel that powers every agent’s argument.
This role pushes the boundary of retrieval‑augmented reasoning by marrying LLMs with verifiable knowledge sources in a live, multi‑agent setting—an area where no existing product offers full end‑to‑end, low‑latency guarantees.

You will turn uncertainty into actionable trust. By fusing Bayesian inference with agent performance data, you’ll prevent sycophancy and ensure that the debate’s final verdict is statistically sound.
This role pioneers the first end‑to‑end Bayesian ensemble that operates in a live, multi‑agent debate, blending probabilistic programming with LLM confidence signals—a technique that has no direct precedent in commercial systems.

Principal Provenance & Runtime Governance Architect

Infrastructure Engineer · Principal · 1 position
You will build the trust‑engine that turns the debate into a compliant, auditable system. From hash‑chain logs to HITL orchestration, you’ll make sure every decision can be traced and verified.
This role pioneers a runtime, agent‑centric provenance framework that satisfies ISO/IEC 23894 and the EU AI Act—an architectural property that has never been realized in multi‑agent AI systems.
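The hash‑chain log mentioned above has a well‑known minimal form: each entry commits to its predecessor's hash, so any retroactive edit breaks verification from that point onward. A self‑contained sketch (field names are illustrative):

```python
import hashlib
import json
import time

class HashChainLog:
    """Append-only log where each entry commits to its predecessor's hash,
    making retroactive edits detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, record: dict):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps({"prev": prev, "ts": time.time(), "record": record},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"body": body, "hash": digest})

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            if hashlib.sha256(e["body"].encode()).hexdigest() != e["hash"]:
                return False          # entry body was tampered with
            if json.loads(e["body"])["prev"] != prev:
                return False          # chain linkage broken
            prev = e["hash"]
        return True

log = HashChainLog()
log.append({"agent": "a1", "decision": "approve"})
log.append({"agent": "a2", "decision": "escalate"})
print(log.verify())  # True until any entry is altered
```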

Chapter 13: Adversarial Prompt Injection and Misleading Explanations

Lead the design and deployment of the world’s first real‑time, model‑agnostic observability layer that turns a black‑box LLM into a transparent, auditable engine. You’ll build the hardware‑software bridge that captures every internal state change and publish it to a tamper‑evident ledger, enabling instant detection of deceptive reasoning.
This role pushes the boundary of AI safety by marrying low‑latency hardware instrumentation with blockchain‑based attestation—an area where no production system currently exists. Your work will set a new standard for internal state observability in large‑scale AI systems.

Mechanistic CoT Decomposition & Fidelity Scoring Lead

Algorithm Developer · Principal · 1 position
Drive the frontier of mechanistic interpretability by turning opaque transformer activations into a faithful, step‑by‑step reasoning graph. You’ll build the engine that not only decomposes CoT but also quantifies how well an explanation reflects the model’s true internal logic.
This role bridges the gap between mechanistic probing and actionable safety metrics—an area that has only recently emerged in academia. By quantifying explanation fidelity at scale, you’ll create the first production‑ready tool that can detect deceptive reasoning even when the final answer appears benign.

Continuous Adversarial Feedback Loop RL Engineer

Applied Scientist · Senior · 1 position
Lead the design of a self‑reinforcing safety loop that turns every detected deception into a learning signal for the model. You’ll build the RL controller that continuously adapts the safety reward, ensuring the LLM remains trustworthy even as attackers evolve.
This role pioneers safety‑RL at the scale of billions of parameters, integrating internal confidence signals (e.g., low‑entropy refusals) into a dynamic reward model—an approach that has only recently been demonstrated in research but never deployed in production.

Chapter 14: Communication Graph Vulnerability to Malicious Agents

You will architect and deliver the world’s first zero‑trust consensus protocol that runs on commodity MQTT brokers, marrying cryptographic attestation with graph‑aware trust weighting. Your work will enable multi‑agent systems to reach agreement even when a fraction of the network is actively compromised.
This role pioneers the fusion of formal cryptographic guarantees with adaptive graph‑aware consensus, a combination that has never been realized at scale in edge‑deployed MAS.

You will build the first real‑time, submodular‑optimization‑driven graph evolution engine for multi‑agent systems, enabling them to reconfigure their topology on the fly in response to attacks or failures.
This role pushes the frontier by applying submodular optimization—traditionally a static, offline technique—to the dynamic, distributed setting of edge‑deployed MAS.
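For reference, the greedy algorithm at the heart of monotone submodular maximization is short and carries a (1 - 1/e) approximation guarantee; the sketch below uses a toy node‑coverage objective, whereas the real engine would add lazy evaluation and distributed execution:

```python
def greedy_submodular(candidates, objective, budget):
    """Classic greedy for monotone submodular maximization: repeatedly pick
    the candidate with the largest marginal gain until the budget is spent."""
    chosen = []
    for _ in range(budget):
        gains = [(objective(chosen + [c]) - objective(chosen), c)
                 for c in candidates if c not in chosen]
        if not gains:
            break
        gain, best = max(gains, key=lambda g: g[0])
        if gain <= 0:
            break  # no candidate improves the objective
        chosen.append(best)
    return chosen

# Toy objective: number of nodes covered by the selected edges (submodular).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]
cover = lambda es: len({n for e in es for n in e})
print(greedy_submodular(edges, cover, budget=3))  # e.g. [(0, 1), (2, 3), (3, 4)]
```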

You will pioneer the first end‑to‑end, formally verified local robustness certification (LRC) system that runs on 32‑bit microcontrollers, enabling agents to autonomously certify their robustness and isolate malicious code before it propagates.
This role blends formal verification, randomized smoothing, and secure enclave design into a single, deployable package—an unprecedented combination for edge‑deployed MAS.

Chapter 15: Adaptive Multi‑Agent Defense Against Adversarial Coordination

Lead the design and deployment of the most advanced adversarial training loop for multi‑agent systems. You’ll build the evolutionary attacker generator, craft dynamic role policies, and ensure the resulting agents converge under Byzantine threat models while remaining interpretable.
This role pioneers the first end‑to‑end, evolutionary‑driven multi‑agent training framework that adapts roles in real time, a capability that has never been demonstrated at scale in hostile environments.

Architect and ship a resilient federated learning platform that blends anomaly detection, reputation scoring, and cryptographic aggregation to thwart poisoning attacks in real time.
This role implements the first sub‑linear, privacy‑preserving aggregation scheme that combines geometric anomaly detection with dynamic reputation vectors, a breakthrough for secure, large‑scale multi‑agent learning.

Lead the creation of a trust‑aware perception engine and a statistically‑certified LLM smoothing layer, ensuring that every sensor reading and language output is vetted for integrity before influencing collective decisions.
This role fuses Dirichlet‑based trust distributions with ray‑tracing‑derived dynamic FOV, and introduces the first randomized smoothing scheme for LLM agents that provides a certified radius against adversarial hallucinations.
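The certified radius referenced above follows the standard randomized‑smoothing bound of Cohen et al.: if the smoothed model's top output has probability at least p (> 0.5) under Gaussian input noise N(0, σ²), the prediction is stable within L2 radius σ·Φ⁻¹(p). Extending this bound to LLM outputs is exactly this role's research question; the numeric core is simple:

```python
from statistics import NormalDist

def certified_radius(sigma, p_lower):
    """If the smoothed classifier's top class has probability >= p_lower
    (> 0.5) under N(0, sigma^2) input noise, the prediction is certifiably
    stable within this L2 radius (Cohen et al., 2019)."""
    return sigma * NormalDist().inv_cdf(p_lower)

print(round(certified_radius(sigma=0.25, p_lower=0.9), 4))  # ~0.3204
```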

Build What Comes Next

If you’re ready to build the next generation of secure, privacy‑preserving AI that operates at scale, bring your expertise to Corpora.ai. Apply now and join a team that turns bold ideas into world‑changing technology.