Lead the design and deployment of the world’s first real‑time, model‑agnostic observability layer that turns a black‑box LLM into a transparent, auditable engine. You’ll build the hardware‑software bridge that captures every internal state change and publishes it to a tamper‑evident ledger, enabling instant detection of deceptive reasoning.
This role pushes the boundary of AI safety by marrying low‑latency hardware instrumentation with blockchain‑based attestation—an area where no production system currently exists. Your work will set a new standard for internal state observability in large‑scale AI systems.
Ground‑Truth Observability Layer (GLO) and Multi‑Agent Verification Protocol (MAVP)
Protects from: Adversarial Prompt Injection and Misleading Explanations
The GLO must capture every internal state change of a closed‑source LLM in real time, while MAVP requires a tamper‑evident ledger and hardware attestation. Both demand a unified, ultra‑low‑latency, hardware‑centric system that can operate outside the model’s inference loop.
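To make the capture path concrete, here is a minimal sketch in Python, assuming an open PyTorch model as a stand‑in for the closed‑source target: framework forward hooks play the role of the GPU/kernel‑level instrumentation, and the snapshot format is an illustrative assumption, not the production schema.

```python
# Illustrative sketch only: software-level state capture via framework hooks.
# The real GLO would sit in GPU/TPU or kernel-level instrumentation; this
# stand-in shows the shape of the data path (capture -> serialize -> emit).
import hashlib
import json
import time

import torch
import torch.nn as nn

snapshots = []  # stand-in for the stream that would feed the ledger

def capture_hook(module, inputs, output):
    """Record a hashed, timestamped snapshot of a layer's output tensor."""
    tensor = output.detach().cpu()
    snapshots.append({
        "layer": module.__class__.__name__,
        "t_ns": time.monotonic_ns(),
        "shape": list(tensor.shape),
        # Hash rather than ship raw activations; keeps the record compact.
        "digest": hashlib.sha256(tensor.numpy().tobytes()).hexdigest(),
    })

# Tiny stand-in model; a real deployment would hook every attention block.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
for layer in model:
    layer.register_forward_hook(capture_hook)

model(torch.randn(1, 16))
print(json.dumps(snapshots, indent=2))
```

The key design point the sketch illustrates is that capture happens outside the inference loop: hooks observe and emit digests, but never modify the forward pass.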
You will build a dedicated sensor stack (GPU/TPU hooks, kernel‑level probes, or a custom ASIC) that streams attention maps, token embeddings, and logits to a distributed ledger; a Merkle‑tree blockchain with cryptographic signatures; and an end‑to‑end monitoring dashboard that flags state divergences within milliseconds.
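For the ledger side, here is a hedged sketch of how captured‑state digests could be rolled into a signed, hash‑chained Merkle root per batch. The HMAC "signature", batch layout, and genesis value are stdlib stand‑ins for illustration; the production MAVP would sign with hardware‑sealed asymmetric attestation keys.

```python
# Illustrative sketch only: Merkle root over a batch of state digests,
# hash-chained to the previous block and "signed" (HMAC here as a stdlib
# stand-in for the hardware-attested asymmetric signature MAVP requires).
import hashlib
import hmac
import os

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaves up to a single root (duplicate last if odd)."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

ATTESTATION_KEY = os.urandom(32)  # in production: key sealed in hardware

def append_block(prev_root: bytes, state_digests: list[bytes]) -> dict:
    """Chain a new batch of captured-state digests onto the ledger."""
    root = sha256(prev_root + merkle_root(state_digests))
    return {
        "root": root.hex(),
        "signature": hmac.new(ATTESTATION_KEY, root,
                              hashlib.sha256).hexdigest(),
    }

# Usage: each batch of snapshot digests becomes one tamper-evident block.
genesis = b"\x00" * 32
block = append_block(genesis, [sha256(b"attention-map-0"), sha256(b"logits-0")])
print(block)
```

Because each root commits to the previous one, altering any recorded state after the fact breaks every subsequent block, which is what makes divergence detection auditable rather than merely observable.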
PhD or Master’s in Computer Engineering, Systems Engineering, or a related field, with a focus on real‑time systems or hardware security.
Within 12 months, deliver a fully operational GLO that captures >99% of internal state changes with <10 ms latency, and a MAVP ledger that detects and flags deceptive explanation fragments in real time, cutting missed (false‑negative) jailbreak detections by >90%.
Longer term, lead the expansion of the observability stack to multimodal models, integrate it with downstream safety modules, and shape the company’s AI‑trustability platform.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.