
Lead Neurosymbolic Engineer – Symbolic Structured Explanation Modules

Job ID: corpora-jobs-1778796293285-db9d41c6 · Frontier Development
Applied Scientist · Lead · 1 position

Why This Role is Different

Frontier Development Role

Architect a neurosymbolic explanation engine that turns black‑box model reasoning into formally verifiable, human‑readable logic—pushing the frontier of trustworthy AI in safety‑critical applications.

The Frontier Element

You will create a hybrid system that fuses LLM chain-of-thought reasoning with MaxSAT constraint solving, a combination that, to our knowledge, has not yet been deployed at scale in production. The engine will guarantee that explanations survive adversarial perturbations and can be audited by regulators.

🔬

Project Context

Research Area

Symbolic‑Structured Explanation Modules (SSEM)

From: Overfitting of Explainability Models to Benign Data

Why This Role is Critical

SSEM bridges LLM‑generated explanations with formal logic, ensuring that predicates remain valid under perturbations. This role requires expertise in natural‑language processing, symbolic reasoning, and constraint solving to build a lightweight engine that can be embedded in real‑time agents.

What You Will Build

A quasi‑symbolic CoT extractor that maps LLM outputs to human‑readable predicates, a MaxSAT‑based constraint solver that enforces logical consistency, and an end‑to‑end pipeline that generates audit‑ready, verifiable explanations for multi‑agent systems.
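As an illustrative sketch only, the core of this pipeline can be pictured as follows: predicates extracted from an LLM explanation become weighted soft clauses, domain rules become hard constraints, and a MaxSAT solve keeps the largest consistent subset of the explanation. All names, the toy domain, and the brute-force solver below are hypothetical stand-ins (a production system would use a dedicated MaxSAT solver such as those in the PySAT toolkit):

```python
from itertools import product

def max_sat(variables, hard, soft):
    """Tiny brute-force weighted MaxSAT for illustration.

    hard: list of functions assignment -> bool that must all hold.
    soft: list of (weight, clause_fn) pairs; total satisfied weight is maximized.
    """
    best, best_score = None, -1
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if not all(c(a) for c in hard):
            continue  # hard constraints are inviolable
        score = sum(w for w, c in soft if c(a))
        if score > best_score:
            best, best_score = a, score
    return best, best_score

# Toy autonomous-driving domain (hypothetical predicate names).
variables = ["obstacle_ahead", "braking", "accelerating"]

# Hard domain rules: braking and accelerating are mutually exclusive;
# an obstacle ahead must imply braking.
hard = [
    lambda a: not (a["braking"] and a["accelerating"]),
    lambda a: (not a["obstacle_ahead"]) or a["braking"],
]

# Soft clauses extracted from an LLM explanation, weighted by confidence.
# Note the explanation is internally inconsistent (brake AND accelerate);
# the solver drops the lowest-weight conflicting predicate.
soft = [
    (3, lambda a: a["obstacle_ahead"]),
    (2, lambda a: a["braking"]),
    (1, lambda a: a["accelerating"]),
]

assignment, score = max_sat(variables, hard, soft)
print(assignment, score)
# → {'obstacle_ahead': True, 'braking': True, 'accelerating': False} 5
```

The design point this sketch captures: hard constraints encode domain knowledge that can never be violated, while soft-clause weights let the solver arbitrate when the LLM's explanation contradicts itself.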

🛠

Key Responsibilities

  • Design a quasi‑symbolic abstraction layer that extracts predicates from LLM explanations while preserving semantic fidelity.
  • Implement a lightweight MaxSAT solver integration that checks logical consistency across agents and under perturbations.
  • Develop a spatio‑temporal concept decoder that maps continuous sensor data to first‑order predicates for robotics or autonomous driving.
  • Create a formal verification suite to evaluate explanation correctness and robustness.
  • Collaborate with the IAT team to embed symbolic constraints into the joint loss function.
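To make the spatio-temporal concept decoder responsibility concrete, here is a minimal sketch of grounding continuous sensor readings into first-order predicates. The thresholds, predicate names, and frame layout are all hypothetical; a real decoder would learn these mappings rather than hard-code them:

```python
def decode_predicates(frame, dist_threshold=10.0, speed_threshold=0.5):
    """Map one sensor frame to a set of ground first-order predicates.

    frame: {"objects": {obj_id: {"distance": m, "speed": m/s}}}
    Returns predicates as tuples, e.g. ("Near", "ego", "ped_1").
    """
    preds = set()
    for obj_id, obj in frame["objects"].items():
        if obj["distance"] < dist_threshold:
            preds.add(("Near", "ego", obj_id))       # proximity predicate
        if obj["speed"] > speed_threshold:
            preds.add(("Moving", obj_id))            # dynamic object
        else:
            preds.add(("Static", obj_id))            # stationary object
    return preds

# Hypothetical frame: a nearby walking pedestrian and a distant parked car.
frame = {"objects": {
    "ped_1": {"distance": 4.2, "speed": 1.3},
    "car_7": {"distance": 25.0, "speed": 0.0},
}}
print(sorted(decode_predicates(frame)))
# → [('Moving', 'ped_1'), ('Near', 'ego', 'ped_1'), ('Static', 'car_7')]
```

Predicates produced this way are what the consistency checker and knowledge graph would consume downstream.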
🎯

Required Skills & Experience

Technical Must-Haves

  • LLM fine‑tuning (OpenAI API, Hugging Face), Expert: generate quasi‑symbolic chain‑of‑thought explanations.
  • Symbolic logic and constraint solving (MaxSAT, Prolog), Expert: implement predicate consistency checks.
  • Knowledge graph construction and querying (Neo4j, RDF), Advanced: store and retrieve domain predicates.
  • Formal verification tools (Coq, Isabelle), Proficient: validate explanation proofs.
  • Computer vision pipelines (YOLO, EfficientNet), Proficient: ground perceptual inputs into symbolic predicates.

Experience Requirements

  • 4+ years in neurosymbolic AI or formal methods.
  • Publications on symbolic reasoning or LLM interpretability.
  • Experience building production‑grade NLP pipelines.

Education

PhD in Artificial Intelligence, Computer Science, or a related field with a focus on symbolic reasoning or NLP.

Preferred Skills

  • Experience with robotics or autonomous driving perception stacks.
  • Knowledge of regulatory frameworks for explainability (EU AI Act).
🤝

You Will Thrive Here If...

  • You thrive in high‑autonomy environments where experimentation is rewarded.
  • You are passionate about bridging theory and practice, especially in formal verification.
  • You can communicate complex logical concepts to non‑technical stakeholders.
📈

Impact & Growth

12-Month Impact

Deliver a verifiable explanation engine that achieves <3% logical inconsistency under adversarial perturbations, enabling audit‑ready deployments in autonomous vehicles or medical diagnosis within the first year.

Growth Opportunity

Scale the neurosymbolic framework to new domains (e.g., finance, energy), mentor a team of symbolic engineers, and shape the company’s research roadmap.

Ready to Push the Boundaries?

If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.