Architect a neurosymbolic explanation engine that turns black‑box model reasoning into formally verifiable, human‑readable logic—pushing the frontier of trustworthy AI in safety‑critical applications.
You will create a hybrid system that fuses LLM chain‑of‑thought reasoning with MaxSAT constraint solving, a combination that, to our knowledge, has not yet been deployed at scale in production. The engine is designed to guarantee that explanations remain valid under adversarial perturbations and can be audited by regulators.
Symbolic‑Structured Explanation Modules (SSEM)
Derived from the research problem: Overfitting of Explainability Models to Benign Data
SSEM bridges LLM‑generated explanations with formal logic, ensuring that predicates remain valid under perturbations. This role requires expertise in natural‑language processing, symbolic reasoning, and constraint solving to build a lightweight engine that can be embedded in real‑time agents.
You will build a quasi‑symbolic CoT extractor that maps LLM outputs to human‑readable predicates, a MaxSAT‑based constraint solver that enforces logical consistency across those predicates, and an end‑to‑end pipeline that generates audit‑ready, verifiable explanations for multi‑agent systems.
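To give a flavor of how these pieces could fit together, the sketch below shows extracted predicates checked against hard domain rules with a MaxSAT solver. It is illustrative only: it assumes the PySAT library (python-sat), and the predicate names, rules, and weights are hypothetical stand‑ins for what the CoT extractor would actually produce.

```python
# Minimal sketch: check an LLM-extracted predicate set against hard domain
# rules with a MaxSAT solver. Assumes PySAT (pip install python-sat).
# Predicates and rules below are illustrative, not part of the role spec.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Hypothetical predicates extracted from a chain-of-thought trace,
# mapped to propositional variables 1..n.
predicates = {
    "obstacle_ahead": 1,
    "brakes_applied": 2,
    "vehicle_accelerating": 3,
}

wcnf = WCNF()

# Hard domain constraints (must hold in every valid explanation):
#   obstacle_ahead -> brakes_applied            ==  (-1 v 2)
#   brakes_applied -> not vehicle_accelerating  ==  (-2 v -3)
wcnf.append([-predicates["obstacle_ahead"], predicates["brakes_applied"]])
wcnf.append([-predicates["brakes_applied"], -predicates["vehicle_accelerating"]])

# Soft constraints: predicates the LLM asserted in its explanation.
# The solver drops as few of these as possible to restore consistency.
llm_asserted = ["obstacle_ahead", "vehicle_accelerating"]
for name in llm_asserted:
    wcnf.append([predicates[name]], weight=1)

with RC2(wcnf) as solver:
    model = solver.compute()  # a consistent truth assignment over all predicates
    # solver.cost counts the LLM assertions that had to be dropped,
    # i.e. a logical-inconsistency score for this explanation.
    print("consistent assignment:", model)
    print("dropped assertions (inconsistency cost):", solver.cost)
```

In this toy instance the two asserted predicates cannot both hold under the rules, so the solver reports a cost of 1; a cost of 0 would mean the explanation is fully consistent with the domain constraints.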
PhD in Artificial Intelligence, Computer Science, or a related field with a focus on symbolic reasoning or NLP.
Within the first year, deliver a verifiable explanation engine with a logical‑inconsistency rate below 3% under adversarial perturbations, enabling audit‑ready deployments in domains such as autonomous vehicles and medical diagnosis.
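One hypothetical way to operationalize that target, assuming each explanation is scored by a consistency check like the MaxSAT sketch above; the function names here (perturb, explain, check_cost) are placeholders rather than an existing API.

```python
# Hypothetical evaluation harness for the <3% inconsistency target: perturb each
# input, regenerate the explanation, and count cases where the consistency check
# has to drop at least one asserted predicate. All callables are placeholders.
def inconsistency_rate(inputs, perturb, explain, check_cost):
    """Fraction of perturbed inputs whose explanation is logically inconsistent."""
    inconsistent = 0
    for x in inputs:
        explanation = explain(perturb(x))   # predicates asserted by the LLM
        if check_cost(explanation) > 0:     # the solver had to drop an assertion
            inconsistent += 1
    return inconsistent / len(inputs)

# Target: inconsistency_rate(...) < 0.03 on the adversarial evaluation set.
```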
Scale the neurosymbolic framework to new domains (e.g., finance, energy), mentor a team of symbolic engineers, and shape the company’s research roadmap.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.