Regulated sectors such as healthcare, finance, and autonomous vehicles, and more broadly any domain where AI decisions must be auditable and actionable.
Unreliable explanations can trigger regulatory fines, loss of user trust, and catastrophic decision errors.
FCA first learns a causal graph (or accepts an expert‑defined one). Candidate counterfactuals are then generated with MARM, ensuring cross‑modal consistency, and projected onto the data manifold via a DDPM. Finally, RO‑Lp optimises each candidate to minimise action cost while keeping model change within an Lp budget, and a robustness oracle simulates adversarial model variants to validate CE stability.
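The pipeline above can be sketched end to end on a toy, single‑modality example. Everything here is illustrative: `W` is a hypothetical audited linear scorer, `project_to_manifold` stands in for the DDPM projection, the score‑ascent walk in `fca_search` stands in for MARM's candidate generator, and `robust_under_model_shift` plays the role of the robustness oracle with an L∞ weight budget.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([1.0, 2.0, -1.0])  # hypothetical audited linear scorer

def model(x, w=W):
    return sigmoid(w @ x)

def project_to_manifold(x, strength=0.3):
    # Stand-in for the DDPM projection: shrink the candidate toward
    # the data mean (taken as the origin in this toy example).
    return (1.0 - strength) * x

def robust_under_model_shift(x_cf, budget=0.05, n_variants=30, seed=1):
    # Robustness oracle: the counterfactual must stay valid for every
    # simulated model variant inside an L-inf weight budget.
    rng = np.random.default_rng(seed)
    shifts = rng.uniform(-budget, budget, size=(n_variants, W.size))
    return all(model(x_cf, W + d) >= 0.5 for d in shifts)

def fca_search(x, lr=0.1, max_steps=200):
    # Candidate generation is a simple score-ascent walk here, standing
    # in for MARM's multi-modal generator.
    direction = W / np.linalg.norm(W)
    cand = x.copy()
    for _ in range(max_steps):
        proj = project_to_manifold(cand)
        if model(proj) >= 0.5 and robust_under_model_shift(proj):
            return proj, np.abs(proj - x).sum()  # L1 action cost
        cand = cand + lr * direction
    return None, np.inf

x = np.array([-1.0, -0.5, 0.5])  # instance currently denied (score < 0.5)
cf, cost = fca_search(x)
```

The ordering matters: projection happens before the validity and robustness checks, so only on‑manifold candidates that survive adversarial model variants are ever returned.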
IP
24 months
5
The combination of causal steering, diffusion‑based manifold projection, multi‑modal recourse, and Lp‑bounded optimisation is a tightly coupled algorithmic stack that is difficult to replicate without deep expertise in causal inference, generative modelling, and robust optimisation.
Regulated AI audit and compliance platforms for healthcare, finance, autonomous vehicles, and public sector decision‑making.
Enterprise AI governance suites and Explainable‑AI (XAI) SaaS for consumer apps.
The global AI explainability market is projected to exceed $5 B by 2030, with the regulated sub‑segment alone representing >$1 B in annual spend on audit, compliance, and risk mitigation tools.
EU AI Act, US AI safety mandates, and rising litigation risk have accelerated demand for provably robust explanations. Recent breakthroughs in diffusion models and causal discovery make FCA’s technical stack commercially viable now.
The work addresses fundamental scientific questions in causal inference, generative modelling, and robust optimisation, and has clear societal impact in regulated domains.
A working prototype can be demonstrated on open datasets (e.g., MIMIC‑III, COMPAS) and offers a clear SaaS revenue path for audit firms and fintechs.
Series A will fund scaling the diffusion backbones, building a cloud‑native API, and expanding the multi‑modal library to cover NLP, graph, and time‑series data, positioning the company as the go‑to platform for robust counterfactual audit.
Integrate automated causal discovery with expert‑in‑the‑loop validation and use counterfactual consistency checks to flag anomalies.
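As a minimal sketch of what such a consistency check could look like, the toy below fits one edge of a synthetic SCM, compares the fitted edge against an expert‑supplied plausibility range, and verifies that counterfactuals regenerated under a do‑intervention agree with the ground‑truth structural equation. All names (`EXPERT_RANGE`, `flag_anomalies`) and the two‑variable SCM are illustrative assumptions, not FCA's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic SCM with one edge: income -> score, true coefficient 2.0.
income = rng.normal(size=500)
noise = 0.1 * rng.normal(size=500)
score = 2.0 * income + noise

# Automated discovery step (stand-in): OLS estimate of the edge weight.
beta_hat = (income @ score) / (income @ income)

# Expert-in-the-loop validation: the domain expert supplies a plausible
# range per edge; estimates outside it are flagged for human review.
EXPERT_RANGE = {"income->score": (1.0, 3.0)}

def flag_anomalies(edges):
    return [name for name, beta in edges.items()
            if not (EXPERT_RANGE[name][0] <= beta <= EXPERT_RANGE[name][1])]

flags = flag_anomalies({"income->score": beta_hat})

# Counterfactual consistency check: outcomes regenerated under
# do(income := income + 1) with the fitted edge should match the
# ground-truth structural equation (known here because data is synthetic).
score_cf_true = 2.0 * (income + 1) + noise
score_cf_pred = score + beta_hat * 1.0
inconsistency = np.max(np.abs(score_cf_pred - score_cf_true))
```

On real data the ground‑truth equation is unavailable, so the same check would compare the fitted SCM's interventional predictions against held‑out or experimental data instead.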
Leverage fast samplers (DDIM, DPM‑Solver) and transfer learning from publicly available checkpoints; offer a lightweight inference engine.
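To make the fast‑sampler point concrete, here is a minimal numpy sketch of the deterministic DDIM update with strided timesteps, so 1,000 training steps need only ~10 network calls. The trained noise network is replaced by an oracle predictor for a known clean sample (`x0_true`), an assumption made purely so the sampler's output can be checked exactly.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)

x0_true = np.array([0.5, -1.0, 2.0])      # clean sample the "network" knows

def eps_pred(x_t, t):
    # Oracle noise predictor: inverts the forward process for x0_true,
    # standing in for a trained denoising network.
    a = alphas_bar[t]
    return (x_t - np.sqrt(a) * x0_true) / np.sqrt(1.0 - a)

def ddim_sample(x_T, stride=100):
    # Deterministic DDIM update visiting every `stride`-th timestep.
    ts = list(range(T - 1, -1, -stride))
    x = x_T
    for i, t in enumerate(ts):
        eps = eps_pred(x, t)
        # Predict the clean sample implied by the current noise estimate.
        x0_hat = (x - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
        if i + 1 < len(ts):
            t_prev = ts[i + 1]
            # Jump directly to the earlier timestep (eta = 0, no noise added).
            x = np.sqrt(alphas_bar[t_prev]) * x0_hat \
                + np.sqrt(1 - alphas_bar[t_prev]) * eps
        else:
            x = x0_hat
    return x

rng = np.random.default_rng(0)
sample = ddim_sample(rng.standard_normal(3))
```

Because the update is deterministic and the predictor is exact here, the strided sampler recovers `x0_true` to floating‑point precision, which is the property that makes coarse timestep grids viable for a lightweight inference engine.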
Continuously update the robustness oracle with new attack families and employ adversarial training of the steering module.
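One way to keep the oracle extensible is an attack registry: each attack family is a function from model weights to a perturbed variant, and new families plug in via a decorator. The registry pattern, the `@attack` decorator, and the two example families below are illustrative assumptions, shown on a toy linear model.

```python
import numpy as np

W = np.array([1.0, 2.0, -1.0])  # hypothetical audited linear scorer

def valid(x_cf, w):
    return w @ x_cf >= 0.0      # positive class on the logit scale

# Extensible attack registry: each family maps (w, x_cf, rng) to a
# perturbed weight vector; a new attack family is one decorator away.
ATTACKS = {}

def attack(name):
    def register(fn):
        ATTACKS[name] = fn
        return fn
    return register

@attack("uniform_linf")
def uniform_linf(w, x_cf, rng, budget=0.05):
    # Random model variants inside an L-inf weight budget.
    return w + rng.uniform(-budget, budget, size=w.shape)

@attack("worst_case_linf")
def worst_case_linf(w, x_cf, rng, budget=0.05):
    # Closed-form worst case for a linear model: push every weight
    # against the sign of the counterfactual's features.
    return w - budget * np.sign(x_cf)

def oracle_report(x_cf, n_trials=20, seed=0):
    rng = np.random.default_rng(seed)
    return {name: all(valid(x_cf, atk(W, x_cf, rng)) for _ in range(n_trials))
            for name, atk in ATTACKS.items()}

x_cf = np.array([0.5, 0.8, -0.2])  # candidate counterfactual
report = oracle_report(x_cf)
```

A report with a `False` entry would feed directly into adversarial training of the steering module, since it names the attack family that broke the counterfactual.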
Engage with standard‑setting bodies early and align FCA outputs with emerging frameworks (e.g., EU AI Act, ISO/IEC 42001).