Modelling is essential to explore high‑dimensional configuration spaces, evaluate robustness under diverse adversarial scenarios, and guide hyper‑heuristic orchestration before costly physical deployment, thereby reducing risk, informing design decisions, and ensuring compliance.
This project develops a suite of resilient multi‑agent AI components—AOI‑GBE, TAFA, HTMAD, FCA, RACE, etc.—to enable trustworthy coordination in adversarial environments across autonomous fleets, edge IoT, and cyber‑physical systems.
Each row links to the full modelling brief for that task.
| # | Task | Techniques | Stage | Depends on |
|---|------|------------|-------|------------|
| 1 | Generate realistic synthetic datasets of sensor observations with controlled adversarial perturbations, using simulation and GAN‑based augmentation, to evaluate detection and inference pipelines before hardware deployment. | Monte Carlo, Simulation, GAN | Feasibility | — |
| 2 | Hierarchical Bayesian inference with variational Monte Carlo to quantify policy uncertainty under noisy observations before deployment. | Bayesian Inference, Monte Carlo, Variational Inference | Feasibility | 1 |
| 3 | Generate semantic adversarial scenarios with LLMs and quantify policy regret via RL loops to refine curriculum safety thresholds. | LLM Simulation, Reinforcement Learning, Monte Carlo, Hyper‑heuristic Optimization | Feasibility | 1 |
| 4 | A virtual testbed that blends Bayesian trust scoring, differential privacy, ZK‑proofs, and quantum‑inspired weighting to quantify robustness, overhead, and privacy in a multi‑agent federated learning environment. | Discrete‑Event Simulation (SimPy), Bayesian Trust Scoring (PyMC3), Differential Privacy (Opacus / TensorFlow Privacy), Zero‑Knowledge Proof Generation (libsnark), Blockchain Ledger (Hyperledger Fabric), Quantum‑Inspired Weighting (Grover‑style amplitude amplification simulation), Hyper‑Heuristic Hyperparameter Orchestration (Optuna + Thompson Sampling) | Feasibility | — |
| 5 | Quantify local robustness and evaluate consensus protocols in adversarial graph environments using graph‑theoretic simulation and submodular optimisation. | Graph Theory, Monte Carlo Simulation, Submodular Optimization, Hyper‑Heuristic Orchestration | Feasibility | — |
| 6 | Build a causal‑graph discovery and diffusion‑based manifold projection pipeline to generate counterfactual explanations and evaluate their robustness against adversarial perturbations in a simulated environment. | Causal Discovery, Diffusion Models, Monte Carlo Simulation, Adversarial Testing | Feasibility | — |
| 7 | Quantify the trade‑off between token‑budgeted chain‑of‑thought, uncertainty‑driven budgets, and LLM counterfactual rewards using Bayesian optimisation and Monte Carlo simulation. | Bayesian Optimisation, Monte Carlo Simulation, Multi‑Agent Reinforcement Learning (MARL), LLM‑Driven Counterfactual Generation | Feasibility | — |
| 8 | Simulate the RACE architecture to evaluate Byzantine resilience, dynamic trust, and runtime explainability under adversarial coordination. | Simulation, Reinforcement Learning, Bayesian Optimisation | Feasibility | — |
| 9 | Dynamic, data‑driven selection of low‑level search heuristics to accelerate robust policy inference under adversarial observation perturbations. | Hyper‑heuristic, Reinforcement Learning, Bayesian Optimization | Feasibility | 1, 2, 3 |
| 10 | Automated, multi‑objective tuning of trust, privacy, and quantum‑weighting in a federated multi‑agent system. | Hyper‑heuristic, Multi‑objective Optimisation, Bayesian Optimisation | Feasibility | 4 |
| 11 | An adaptive hyper‑heuristic that selects submodular optimisation, trust thresholds, and consensus protocols in real time to maximise graph robustness under dynamic adversarial conditions. | Hyper‑heuristic, Submodular Optimization, Online Learning, Simulation, Graph Theory | Feasibility | 5 |
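Several tasks above (notably 1, 5, and 6) rest on the same Monte Carlo pattern: simulate observations, inject a bounded adversarial perturbation, and estimate a robustness metric by repetition. As a minimal sketch of that loop, with a toy threshold detector and additive perturbations (all function names here are hypothetical, not the project's actual pipeline):

```python
import random

def sense(true_value, noise_sd, rng):
    """Simulated sensor reading: ground truth plus Gaussian noise."""
    return true_value + rng.gauss(0.0, noise_sd)

def perturb(reading, epsilon):
    """Bounded adversarial shift applied to a reading."""
    return reading + epsilon

def detects(reading, threshold=0.5):
    """Toy detector: flags readings above a fixed threshold."""
    return reading > threshold

def detection_rate(true_value, epsilon, trials=10_000, noise_sd=0.1, seed=0):
    """Monte Carlo estimate of detection probability under perturbation."""
    rng = random.Random(seed)
    hits = sum(
        detects(perturb(sense(true_value, noise_sd, rng), epsilon))
        for _ in range(trials)
    )
    return hits / trials

# Sweep perturbation budgets to watch robustness degrade.
rates = {eps: detection_rate(1.0, eps) for eps in (0.0, -0.5, -1.0)}
```

In the real tasks the toy detector would be replaced by the trained pipeline under test, and the additive shift by simulation- or GAN-generated perturbations; the estimator around them is unchanged.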
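Tasks 2, 4, and 8 all involve Bayesian trust or uncertainty scoring. The simplest instance of that idea is a conjugate Beta-Bernoulli update per agent, which is a common building block for the kind of trust scoring Task 4 describes (this sketch is a stand-in, not the project's PyMC3 model):

```python
from dataclasses import dataclass

@dataclass
class BetaTrust:
    """Conjugate Beta-Bernoulli trust score for a single agent."""
    alpha: float = 1.0  # pseudo-count of consistent reports
    beta: float = 1.0   # pseudo-count of inconsistent reports

    def update(self, consistent: bool) -> None:
        """Bayesian update after observing one report."""
        if consistent:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        """Posterior mean trust, in [0, 1]."""
        return self.alpha / (self.alpha + self.beta)

# An honest agent's trust rises; a Byzantine agent's trust collapses.
honest, byzantine = BetaTrust(), BetaTrust()
for _ in range(20):
    honest.update(True)
    byzantine.update(False)
```

The closed-form posterior makes the update cheap enough to run per message, which matters once trust feeds into consensus weighting at runtime.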
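Tasks 5 and 11 lean on submodular optimisation for graph robustness. The workhorse there is the greedy algorithm, which for monotone submodular objectives such as coverage carries a (1 − 1/e) approximation guarantee. A minimal sketch, using a toy "monitor placement" coverage objective (the graph and names are illustrative only):

```python
def greedy_cover(candidate_sets, k):
    """Greedy maximisation of the submodular coverage objective:
    choose up to k sets whose union covers the most elements."""
    chosen, covered = [], set()
    for _ in range(k):
        remaining = [name for name in candidate_sets if name not in chosen]
        if not remaining:
            break
        # Pick the set with the largest marginal gain over what is covered.
        best = max(remaining, key=lambda n: len(candidate_sets[n] - covered))
        if not candidate_sets[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= candidate_sets[best]
    return chosen, covered

# Nodes each candidate monitor position would observe (toy graph).
neighbourhoods = {
    "a": {"a", "b", "c"},
    "b": {"b", "c"},
    "c": {"c", "d", "e", "f"},
    "d": {"f"},
}
picked, seen = greedy_cover(neighbourhoods, k=2)
```

The same loop generalises to any monotone submodular robustness surrogate; only the marginal-gain function changes.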
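Tasks 9 to 11 share the hyper-heuristic layer: an online controller that learns which low-level heuristic to invoke next. One standard realisation, and the one Task 4's orchestration pairs with Optuna, is Thompson sampling over a Beta posterior per heuristic. A self-contained sketch with two hypothetical heuristics and a synthetic success signal:

```python
import random

class ThompsonHyperHeuristic:
    """Online heuristic selection: keep a Beta posterior over each
    heuristic's success rate, sample all posteriors, run the arg-max."""

    def __init__(self, heuristics, seed=0):
        self.rng = random.Random(seed)
        self.stats = {h: [1.0, 1.0] for h in heuristics}  # [alpha, beta]

    def select(self):
        draws = {h: self.rng.betavariate(a, b)
                 for h, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def reward(self, heuristic, success):
        self.stats[heuristic][0 if success else 1] += 1.0

# Toy environment: one heuristic succeeds 80% of the time, the other 20%.
true_rates = {"local_search": 0.8, "random_restart": 0.2}
hh = ThompsonHyperHeuristic(true_rates, seed=1)
env = random.Random(2)
for _ in range(500):
    h = hh.select()
    hh.reward(h, env.random() < true_rates[h])
```

After a few hundred rounds the controller concentrates its pulls on the stronger heuristic while still occasionally probing the weaker one, which is exactly the explore/exploit behaviour the adaptive selection in Task 11 needs under drifting adversarial conditions.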
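Task 4's privacy component will use Opacus or TensorFlow Privacy; both are built on the Gaussian mechanism. To make the privacy/accuracy overhead it must quantify concrete, here is the mechanism in isolation for a bounded-mean query (a standalone sketch, not library code; the noise scale follows the standard bound σ ≥ Δ·√(2 ln(1.25/δ))/ε):

```python
import math
import random

def dp_mean(values, lower, upper, epsilon, delta, seed=0):
    """(epsilon, delta)-DP release of a bounded mean via the Gaussian
    mechanism: clamp values, average, add calibrated Gaussian noise.
    The mean of n values bounded in [lower, upper] has L2 sensitivity
    (upper - lower) / n."""
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return sum(clamped) / n + random.Random(seed).gauss(0.0, sigma)

# 1000 bounded values, true mean 0.5; noise shrinks as n grows.
noisy = dp_mean([0.2, 0.4, 0.6, 0.8] * 250,
                lower=0.0, upper=1.0, epsilon=1.0, delta=1e-5)
```

Because sensitivity falls as 1/n, the released mean stays close to the true value at this cohort size, which is the trade-off the testbed is meant to measure at federated scale.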