Architect and ship a privacy‑first federated learning platform that lets agents collaborate on explanations without leaking sensitive data—essential for regulated, multi‑agent deployments.
You will build the first end‑to‑end system that combines federated learning, differential privacy, and explainability in a single pipeline, a combination that has not yet been demonstrated at scale in production.
Federated Explainability with Differential Privacy (FED‑EXP)
Motivated by: Overfitting of Explainability Models to Benign Data
FED‑EXP requires a robust, privacy‑preserving federated learning infrastructure that can share explanation gradients while respecting differential‑privacy budgets. In this role, you will design and implement the end‑to‑end pipeline, ensuring compliance, scalability, and low‑latency inference.
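The core privacy mechanism described above can be sketched as follows: each client clips its per-example explanation gradients and adds Gaussian noise (the Gaussian mechanism) before anything leaves the device. This is a minimal illustration, not the FED‑EXP implementation; the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions chosen for clarity.

```python
import numpy as np

def privatize_gradients(grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip per-example explanation gradients to bound sensitivity, sum them,
    add Gaussian noise scaled to the clip norm, and return the noisy mean.
    Illustrative sketch only; parameter names are assumptions, not a real API.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian mechanism: noise stddev proportional to the sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(grads)
```

In a real deployment the `noise_multiplier` would be derived from the client's remaining privacy budget by an accountant (e.g. RDP-based), rather than fixed.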
You will build a federated training framework that injects DP noise into explanation gradients, a secure aggregation protocol, and an explanation‑serving API that delivers SHAP/LIME explanation maps without exposing raw data. Additional deliverables include a privacy‑budget manager, a monitoring dashboard, and a production‑grade deployment on Kubernetes.
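The privacy‑budget manager listed among the deliverables could, at its simplest, track per‑client epsilon spend under basic sequential composition and refuse requests that would exhaust the budget. The class and method names below are hypothetical; a production accountant would use tighter composition (RDP or zCDP) rather than simple addition.

```python
class PrivacyBudgetManager:
    """Track per-client epsilon consumption under basic sequential
    composition. Minimal sketch with assumed names, not the FED-EXP API."""

    def __init__(self, total_epsilon):
        self.total = total_epsilon   # budget allotted to each client
        self.spent = {}              # client_id -> epsilon consumed so far

    def request(self, client_id, epsilon):
        """Approve the request and record the spend, or deny it if the
        client's remaining budget is insufficient."""
        used = self.spent.get(client_id, 0.0)
        if used + epsilon > self.total:
            return False  # budget exhausted: block the explanation query
        self.spent[client_id] = used + epsilon
        return True

    def remaining(self, client_id):
        return self.total - self.spent.get(client_id, 0.0)
```

A serving API would consult `request` before computing each DP-noised SHAP/LIME map, so that no client can exceed its agreed epsilon over the deployment's lifetime.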
Master’s or PhD in Computer Science, Electrical Engineering, or a related field with a focus on privacy or distributed systems.
Within 12 months, launch a federated explainability platform that achieves <0.1% privacy leakage, supports 100+ client nodes, and delivers SHAP explanations with <200 ms latency, enabling compliance‑ready deployments in finance or healthcare.
Lead the expansion of the privacy‑first platform to multi‑tenant, cross‑border deployments, and mentor a growing team of platform engineers.
If this sounds like the challenge you have been looking for, we want to hear from you. We value what you can build over where you have been.