
Trust‑Aware Federated Aggregation in Multi‑Agent Settings

Project: corpora-patent-1778797329336-d1df8c8b


Draft Patent Application 2 — For Review


TITLE OF THE INVENTION

Trust‑Aware Federated Aggregation Architecture for Multi‑Agent Systems

FIELD OF THE INVENTION

The present invention relates to distributed machine learning, specifically to federated learning systems that incorporate dynamic trust assessment, adaptive differential privacy, blockchain‑based auditability, and quantum‑resilient aggregation for heterogeneous multi‑agent networks such as UAV fleets, IoT edge nodes, autonomous vehicles, and industrial cyber‑physical systems.

BACKGROUND AND PRIOR ART

Federated learning (FL) enables collaborative model training without sharing raw data, yet remains vulnerable to data poisoning, Byzantine faults, and targeted adversarial updates. Conventional aggregation rules, such as simple averaging, and static robust statistics (e.g., the trimmed mean) are insufficient against coordinated and adaptive poisoning attacks [v7136], [v16338]. Recent work introduces reputation‑based client selection and adaptive weighting to mitigate malicious influence [v15154], [v12128]. However, these approaches lack the end‑to‑end privacy enforcement, verifiable aggregation, and transparent audit trails required by emerging regulations such as the EU AI Act and ISO/IEC 42001. Adaptive differential privacy (DP) schemes that scale noise by client reputation have been proposed, yet they do not provide verifiable compliance proofs [v12800], [v12837]. Blockchain‑enabled trust ledgers offer tamper‑resistant audit trails, but prior art does not integrate them with reputation engines, adaptive DP, and quantum‑resilient aggregation [v9402], [v13219]. Consequently, there remains a technical problem of providing a unified, trust‑aware federated aggregation framework that simultaneously guarantees robustness, privacy, auditability, and scalability in adversarial, resource‑constrained multi‑agent settings.

SUMMARY OF THE INVENTION

The invention discloses a Trust‑Adaptive Federated Aggregation (TAFA) architecture that unifies a Multi‑Dimensional Reputation Engine (MDRE), an Adaptive Differential Privacy Layer (ADPL), a Blockchain‑Enabled Trust Ledger (BLTL), a Quantum‑Resilient Aggregation Core (QRAC), a Federated Graph Contrastive Learning Module (FGCLM), and a Zero‑Shot Policy Transfer with Trust Metrics module (ZSTTM). MDRE computes a continuous reputation vector from statistical consistency, temporal behavior, content similarity, and cryptographic attestations, and applies Bayesian thresholding for soft exclusion. ADPL scales DP noise inversely with reputation and emits zero‑knowledge proofs of compliance. BLTL records reputation scores, update hashes, and ZKP commitments on a lightweight smart‑contract chain, providing immutable auditability and token‑based governance. QRAC employs Grover‑style amplitude amplification and entanglement checks to prioritize trustworthy updates and thwart quantum adversaries. FGCLM aggregates contrastive loss vectors weighted by reputation, reducing communication overhead and mitigating malicious graph structures. ZSTTM aggregates policies using a Bayesian trust metric while balancing explainability and performance. The integrated pipeline delivers robust, privacy‑preserving, auditable, and scalable federated learning for heterogeneous multi‑agent networks.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Embodiment 1 – Multi‑Dimensional Reputation Engine (MDRE)
MDRE computes a reputation vector R = (Rstat, Rtemp, Rsim, Rcrypto) for each client. Rstat derives from gradient norms and loss variance; Rtemp is an exponential moving average (EMA) of per‑round quality; Rsim is the cosine similarity between the local update and the current global model; Rcrypto verifies signed update signatures. Bayesian inference updates each dimension’s posterior probability, and a dynamic threshold τ is recalculated using a Bayesian update rule that incorporates recent convergence speed and detected attack intensity [12][15]. Soft exclusion weights each update by a continuous reputation score, enabling graceful degradation and re‑inclusion of previously penalized clients [11].
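The reputation computation above can be illustrated with a minimal numerical sketch. The specification fixes only the four dimensions and the soft‑exclusion principle; the particular squashing for Rstat, the EMA constant, and the sigmoid used for soft exclusion below are illustrative choices, and all function names are hypothetical.

```python
import numpy as np

def reputation_vector(update, global_model, loss_history, signature_valid,
                      prev_temp, beta=0.9):
    """Illustrative sketch of the four MDRE dimensions."""
    # R_stat: penalize abnormal gradient norms with a simple squashing function
    r_stat = 1.0 / (1.0 + abs(np.linalg.norm(update) - np.linalg.norm(global_model)))
    # R_temp: EMA of per-round quality (here: inverse loss variance)
    quality = 1.0 / (1.0 + np.var(loss_history))
    r_temp = beta * prev_temp + (1 - beta) * quality
    # R_sim: cosine similarity between local update and global model, clipped to [0, 1]
    cos = np.dot(update, global_model) / (
        np.linalg.norm(update) * np.linalg.norm(global_model))
    r_sim = max(cos, 0.0)
    # R_crypto: outcome of signature verification (binary attestation)
    r_crypto = 1.0 if signature_valid else 0.0
    return np.array([r_stat, r_temp, r_sim, r_crypto])

def soft_exclusion_weight(R, tau):
    """Continuous weight around the dynamic threshold tau instead of a hard drop."""
    return 1.0 / (1.0 + np.exp(-10.0 * (R.mean() - tau)))
```

A client whose mean reputation sits well above τ receives weight near 1, one well below receives weight near 0, and clients near the threshold degrade gracefully rather than being dropped, which is what enables the re‑inclusion behavior described above.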

Embodiment 2 – Adaptive Differential Privacy Layer (ADPL)
ADPL modulates the DP noise scale σ by the reputation score: σ = σmax · (1 – Ravg), where Ravg is the mean of the reputation dimensions. High‑trust clients receive lower noise, improving utility, while low‑trust clients receive stronger protection [16]. Each client generates a zero‑knowledge proof (ZKP) that the applied noise satisfies the privacy budget without revealing the budget itself [13]. The ZKP is a recursive proof that can be verified on the blockchain ledger.
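The noise schedule σ = σmax · (1 − Ravg) is straightforward to state in code. This sketch assumes a Gaussian mechanism (the specification does not fix the noise distribution) and omits the ZKP generation step entirely; function names are illustrative.

```python
import numpy as np

def adaptive_noise_scale(R, sigma_max):
    """sigma = sigma_max * (1 - R_avg): higher trust -> lower noise (Embodiment 2)."""
    r_avg = float(np.mean(R))
    return sigma_max * (1.0 - r_avg)

def privatize_update(update, R, sigma_max, rng=None):
    """Add reputation-scaled Gaussian noise to a local update before submission."""
    rng = rng or np.random.default_rng()
    sigma = adaptive_noise_scale(R, sigma_max)
    noisy = update + rng.normal(0.0, sigma, size=update.shape)
    return noisy, sigma
```

The returned σ is the quantity the client's zero‑knowledge proof would attest to: the verifier learns that σ satisfies the budget without learning the budget itself.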

Embodiment 3 – Blockchain‑Enabled Trust Ledger (BLTL)
BLTL records, in a lightweight smart‑contract chain, the reputation vector, the hash of each client update, and the ZKP commitment. The ledger is immutable and tamper‑resistant, providing an external audit point for regulators [13]. Clients stake governance tokens proportional to their historical reputation; malicious behavior drains the stake, providing an economic deterrent [17].
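As a sketch of the ledger semantics, the following in‑memory stand‑in mirrors the three operations BLTL performs: append‑only recording, reputation‑proportional staking, and stake slashing. A real deployment would implement this as a smart contract; the base stake and slashing fraction below are illustrative parameters not fixed by the specification.

```python
class TrustLedger:
    """Minimal in-memory stand-in for the BLTL smart contract (illustrative only)."""

    def __init__(self):
        self.records = []   # append-only: (client_id, update_hash, reputation, zkp_commitment)
        self.stakes = {}    # client_id -> staked governance tokens

    def stake(self, client_id, historical_reputation, base=100):
        # Stake proportional to historical reputation (Embodiment 3)
        self.stakes[client_id] = base * historical_reputation

    def record(self, client_id, update_hash, reputation, zkp_commitment):
        # Immutability here means append-only; a chain would also hash-link entries
        self.records.append((client_id, update_hash, reputation, zkp_commitment))

    def slash(self, client_id, fraction=0.5):
        # Detected malicious behavior drains the stake: the economic deterrent
        self.stakes[client_id] *= (1.0 - fraction)
```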

Embodiment 4 – Quantum‑Resilient Aggregation Core (QRAC)
QRAC applies a Grover‑style amplitude amplification operator to each client’s update vector, prioritizing updates with higher inner‑product similarity to the global model. The weighting factor wi = 1 + α·cos(θi), where θi is the angle between client i’s update and the global model, and α is a tunable amplification parameter [10]. For quantum‑capable nodes, entangled qubits are used to jointly verify that all participants observe the same global state; a sudden drop in purity triggers a rollback [18].
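The weighting factor wᵢ = 1 + α·cos(θᵢ) can be sketched classically as follows; the entanglement purity check is hardware‑dependent and omitted, and the normalization of weights before averaging is an assumed design choice.

```python
import numpy as np

def qrac_weight(update, global_model, alpha=0.5):
    """w_i = 1 + alpha * cos(theta_i): the Grover-style amplification factor."""
    cos = np.dot(update, global_model) / (
        np.linalg.norm(update) * np.linalg.norm(global_model))
    return 1.0 + alpha * cos

def qrac_aggregate(updates, global_model, alpha=0.5):
    """Normalize the amplification weights and average the client updates."""
    w = np.array([qrac_weight(u, global_model, alpha) for u in updates])
    w = w / w.sum()
    return np.sum(w[:, None] * np.stack(updates), axis=0)
```

An update aligned with the global model (cos θ = 1) gets weight 1 + α; an anti‑aligned one gets 1 − α, so aligned contributions dominate the aggregate without hard exclusion.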

Embodiment 5 – Federated Graph Contrastive Learning Module (FGCLM)
Clients construct local graph embeddings of multimodal data (video, temperature, network traffic) and compute a contrastive loss vector Li. Only Li and a prototype vector are transmitted, reducing payload. Aggregation weights Li by the reputation score, mitigating over‑fitting to malicious graph structures [19][20].
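The server side of this embodiment reduces to reputation‑weighted averaging of the transmitted loss vectors and prototypes. The sketch below assumes equal‑shape vectors across clients and is illustrative, not the claimed implementation.

```python
import numpy as np

def fgclm_aggregate(loss_vectors, prototypes, rep_scores):
    """Reputation-weighted averaging of contrastive loss vectors L_i and prototypes.

    Only L_i and a prototype per client are transmitted, so the server never
    sees raw graph embeddings; low-reputation clients contribute little weight.
    """
    w = np.asarray(rep_scores, dtype=float)
    w = w / w.sum()
    L_global = np.sum(w[:, None] * np.stack(loss_vectors), axis=0)
    P_global = np.sum(w[:, None] * np.stack(prototypes), axis=0)
    return L_global, P_global
```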

Embodiment 6 – Zero‑Shot Policy Transfer with Trust Metrics (ZSTTM)
In multi‑agent reinforcement learning, each agent’s policy update is weighted by a Bayesian trust metric τi derived from MDRE. An explainability controller allocates a budget between fidelity of explanations and policy performance, ensuring regulatory compliance without sacrificing effectiveness [21].
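One concrete way to realize the Bayesian trust metric τᵢ is a Beta‑posterior mean over observed good and bad contributions; this is a common Bayesian choice, not one fixed by the specification, and the policy mixture below assumes policies are represented as parameter vectors.

```python
import numpy as np

def bayesian_trust(successes, failures, prior_a=1.0, prior_b=1.0):
    """Beta-posterior mean as an illustrative trust metric tau_i."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

def zsttm_aggregate(policies, trust):
    """Trust-weighted mixture of agents' policy parameter vectors."""
    tau = np.asarray(trust, dtype=float)
    tau = tau / tau.sum()
    return np.sum(tau[:, None] * np.stack(policies), axis=0)
```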

Overall Pipeline
Clients train locally, compute reputation features, apply context‑aware DP, generate ZKPs, and submit updates to the aggregation core. The core aggregates, updates reputation, records proofs on the blockchain, and disseminates the new global model. The system is communication‑efficient (sparsification, prototype sharing), scalable (sharded ledger), and resilient to classical and quantum adversaries.
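A single round of the pipeline above can be sketched end to end, combining the ADPL noise schedule with soft‑exclusion weighting. ZKP generation, ledger writes, and the quantum checks are omitted; shapes, the Gaussian mechanism, and the additive model update are assumptions of this sketch.

```python
import numpy as np

def tafa_round(client_updates, rep_scores, global_model, sigma_max=1.0, rng=None):
    """One illustrative TAFA round: reputation-scaled DP noise, then
    reputation-weighted aggregation into a new global model."""
    rng = rng or np.random.default_rng(0)
    noisy = []
    for u, r in zip(client_updates, rep_scores):
        sigma = sigma_max * (1.0 - r)            # ADPL: high trust -> low noise
        noisy.append(u + rng.normal(0.0, sigma, size=u.shape))
    w = np.asarray(rep_scores, dtype=float)      # soft exclusion: continuous weights
    w = w / w.sum()
    delta = np.sum(w[:, None] * np.stack(noisy), axis=0)
    return global_model + delta                  # new global model to disseminate
```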

CLAIMS

1. A method for trust‑aware federated aggregation in a multi‑agent network, comprising: receiving local model updates from a plurality of agents; computing a multi‑dimensional reputation vector for each agent based on statistical consistency, temporal behavior, content similarity, and cryptographic attestations; applying a Bayesian threshold to determine a continuous reputation score; scaling differential privacy noise inversely with the reputation score; generating a zero‑knowledge proof that the noise scale complies with the privacy budget; recording the reputation score, update hash, and proof on a blockchain ledger; aggregating the noise‑scaled updates weighted by the reputation scores to produce a global model, the aggregating comprising quantum‑inspired weighting of the updates and entanglement consistency checks; updating the reputation vectors based on the aggregated result; and disseminating the updated global model to the agents [12][10][13].

2. A system for trust‑aware federated aggregation, comprising: a client interface for local training and update submission; a reputation engine that computes a multi‑dimensional reputation vector; a differential privacy module that scales noise based on reputation; a zero‑knowledge proof generator; a blockchain ledger for recording reputation scores, update hashes, and proofs; an aggregation core that applies quantum‑inspired weighting and entanglement checks; a global model distributor; a governance token staking mechanism; and a federated graph contrastive learning module [12][10][13][17].

3. The method of claim 1, wherein the reputation vector includes a cosine similarity component between the local update and the current global model [19].

4. The method of claim 1, wherein the Bayesian threshold is updated using a Bayesian update rule that incorporates recent convergence speed and detected attack intensity [12][15].

5. The method of claim 1, wherein the differential privacy noise scale is modulated by the reputation score such that higher reputation yields lower noise [16].

6. The method of claim 1, wherein the zero‑knowledge proof is a recursive ZKP that proves compliance with the noise budget without revealing the budget itself [13].

7. The method of claim 1, wherein the blockchain ledger is a lightweight smart‑contract chain that records reputation scores, update hashes, and ZKP commitments [13].

8. The method of claim 1, further comprising a quantum‑inspired weighting scheme based on Grover amplitude amplification to prioritize updates with higher inner‑product similarity to the global model [10].

9. The method of claim 1, further comprising an entanglement‑based consistency check for quantum‑capable nodes [18].

10. The method of claim 1, wherein the aggregation step includes a soft exclusion mechanism that weights updates by a continuous reputation score rather than hard dropping [11].

11. The method of claim 1, wherein the aggregation includes a federated graph contrastive learning module that aggregates contrastive loss vectors weighted by reputation scores [19][20].

12. The method of claim 1, wherein the aggregation includes a zero‑shot policy transfer module that aggregates policies using a Bayesian trust metric [21].

13. The method of claim 1, further comprising staking, by each agent, governance tokens proportional to historical reputation, wherein malicious behavior drains the stake [17].

14. The method of claim 1, wherein the multi‑agent network is a heterogeneous network comprising UAVs, IoT nodes, autonomous vehicles, and industrial cyber‑physical systems [12].

15. The method of claim 1, wherein communication efficiency is achieved by transmitting only contrastive loss vectors and prototype embeddings [19][20].

16. The method of claim 1, further comprising exposing the reputation vector and the rationale for weighting to human operators for interpretability [13][21].

17. The method of claim 1, further comprising maintaining a zero‑knowledge proof audit trail inspectable by regulators [v14162][v5668].

ABSTRACT

A trust‑aware federated aggregation architecture (TAFA) for heterogeneous multi‑agent networks integrates a multi‑dimensional reputation engine, adaptive differential privacy, blockchain‑based auditability, quantum‑resilient weighting, federated graph contrastive learning, and zero‑shot policy transfer. Clients compute reputation features, apply reputation‑scaled DP noise, generate zero‑knowledge proofs, and submit updates to a blockchain ledger. The aggregation core applies Bayesian thresholding, soft exclusion, Grover‑style amplitude amplification, and entanglement checks to produce a robust global model. The system achieves high resilience to poisoning and Byzantine attacks, preserves privacy with verifiable DP, provides immutable audit trails, and maintains communication efficiency through prototype sharing. TAFA satisfies emerging regulatory requirements for interpretability and auditability while enabling scalable, secure, and trustworthy collaborative AI across UAV fleets, IoT nodes, autonomous vehicles, and industrial cyber‑physical systems.

References — Cited Sources


[1] Is AI secretly learning from you? The unseen power of federated learning (2025-04-01). Federated learning design: How federated learning can be applied in decentralized environments. Implementation challenges: Combating data traffic jams, delay issues, and security risks. Advanced model aggregation: How to combine many devices' contributions without compromising accuracy. Security measures: How to prevent attacks, data poisoning, and adversarial risks...

[2] Targeted Adversarial Poisoning Attack Against Robust Aggregation in Federated Learning for Smart Grids (2026-02-28). To counter these threats, secure aggregation rules have been implemented to reduce the impact of adversarial or malicious updates during the training process. In this paper, we first propose a norm-based aggregation rule specifically designed to mitigate the effects of poisoning attacks within federated learning systems used for power quality classification...

[3] Secure and Private Federated Learning: Achieving Adversarial Resilience through Robust Aggregation (2025-06-04). Federated Learning (FL) enables collaborative machine learning across decentralized data sources without sharing raw data. It offers a promising approach to privacy-preserving AI. However, FL remains vulnerable to adversarial threats from malicious participants, referred to as Byzantine clients, who can send misleading updates to corrupt the global model. Traditional aggregation methods, such as simple averaging, are not robust to such attacks...

[4] A robust and verifiable federated learning framework for preventing data poisonous threats in e-health (2026-03-16). The experimental evaluation indicates that integrating anomaly detection with robust aggregation significantly reduces the impact of poisoning attacks on the global model. In addition, the blockchain logging layer enables transparent tracking of model updates while introducing only limited overhead. Overall, the proposed framework maintains stable model performance even in the presence of adversarial participants. The results suggest that combining defensive learning strategies with transparent ...

[5] Engineering Secure, Scalable, and Responsible Intelligence for Real Applications (2026-04-20). Other attack types target the training process: data poisoning can bias a model or quietly insert backdoors that remain dormant until a specific trigger is present (Liu et al., Trojaning attack on neural networks, NDSS). Model extraction, or "stealing," allows adversaries to recreate proprietary models by querying APIs, as shown in cloud-based attacks. Privacy is also at stake: membership inference and model inversion can reveal whether a person's data was part of training or even rec...

[6] The remarkable growth and adoption of machine learning models have brought along an uncomfortable reality: these systems can be manipulated, deceived, and corrupted by adversarial inputs (2026-04-18). Another line of defenses includes detection mechanisms, identifying when an input is suspiciously adversarial. In practice, though, detection often lags behind sophisticated new attacks. For model poisoning, robust aggregation rules can mitigate malicious updates in federated learning scenarios (where partial updates from multiple participants are combined)...

[7] Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning (2025-12-31). These vulnerabilities highlight an urgent need for the development of defense mechanisms specifically tailored for sparsified FL, ensuring that communication efficiency achieved through sparsification does not compromise the system's robustness against adversarial threats. In this work, we systematically investigate the vulnerabilities of FL under poisoning attacks in the context of sparsified communication-efficient FL. Our analysis demonstrates that existing defense mechanisms, originally desig...

[8] UAH Rotorcraft Systems Engineering and Simulation Center (RSESC) demonstrating capabilities during Huntsville UAH & C-UAS Test Range User Expo 2025 (2026-04-23). "In simple terms, multi-modal federated learning lets a group of drones 'learn together' without sending all their raw data to a single server," Nguyen explains. "Each UAV may collect different types of data - for instance, video, temperature or network signals - to train a small local model on its own data, and shares only model updates rather than the original data. These updates are combined to improve a shared global model. This ultimately improves the resilience and reliability of distribu...

[9] From privacy to trust in the agentic era: a taxonomy of challenges in trustworthy federated learning through the lens of trust report 2.0 (2026-05-07). This federated inference process introduces a novel problem for human oversight, creating a "double black box" problem: both the individual client outputs and their subsequent aggregation remain opaque. To our best knowledge, there is no known research that specifically addresses this scenario or proposes mechanisms to enhance human decision-making in such contexts. Requirement 2: Technical robustness and safety. The second requirement of TAI, technical robustness and safety, refers to the syste...

[10] RobQFL: Robust Quantum Federated Learning in Adversarial Environment (2025-09-04). Federated models in sensitive applications such as autonomous vehicles and cybersecurity face threats from poisoning attacks and Byzantine failures. Solutions like quantum-behaved particle swarm optimization for vehicular networks and quantum-inspired federated averaging for cyberattack detection have demonstrated partial resilience. Moreover, Byzantine fault tolerance in QFL has been studied through adaptations of classical approaches. However, the vulnerability of QFL models to evasion attack...

[11] Hybrid Reputation Aggregation: A Robust Defense Mechanism for Adversarial Federated Learning in 5G and Edge Network Environments (2025-12-17). We implement HRA in a standard FL framework and evaluate it under a variety of adversarial conditions. Our experiments involve a proprietary 5G network dataset containing over 3 million data records, which simulates a realistic edge federated learning scenario with non-IID data across hundreds of clients. We test HRA against strong attackers employing Sybil strategies (multiple colluding adversaries), targeted model poisoning (label flips and backdoors), and untargeted random-noise attacks. Experi...

[12] FLARE: Adaptive Multi-Dimensional Reputation for Robust Client Reliability in Federated Learning (2026-05-13). Federated learning (FL) enables collaborative model training while preserving data privacy. However, it remains vulnerable to malicious clients who compromise model integrity through Byzantine attacks, data poisoning, or adaptive adversarial behaviors. Existing defense mechanisms rely on static thresholds and binary classification, failing to adapt to evolving client behaviors in real-world deployments. We propose FLARE, an adaptive reputation-based framework that transforms client rel...

[13] DSFL: A Dual-Server Byzantine-Resilient Federated Learning Framework via Group-Based Secure Aggregation (2025-09-09). Specifically, our approach, DSFL, introduces a secure, modular secret-sharing scheme and a trust-aware, group-based aggregation mechanism. These additions reduce collusion risk and strengthen both privacy and robustness under adversarial conditions while maintaining low computational and communication overhead, making it particularly suited for edge-based FL deployments. As shown in our evaluations, DSFL outperforms existing schemes across multiple dimensions: privacy, Byzantine tolerance, and scal...

[14] ZTFed-MAS2S: A Zero-Trust Federated Learning Framework with Verifiable Privacy and Trust-Aware Aggregation for Wind Power Data Imputation (2025-08-23). 1) The ZTFed framework integrates verifiable Differential Privacy with Non-Interactive Zero-Knowledge Proofs (DP-NIZK) and a Confidentiality and Integrity Verification (CIV) mechanism to enable verifiable privacy preservation and secure, integrity-assured model transmission. In addition, it employs a Dynamic Trust-Aware Aggregation (DTAA) mechanism to enhance resilience against anomalous clients and incorporates sparsity- and quantization-based compression to reduce communication overhead. 2) The...

[15] Trust Aware Federated Learning for Secure Bone Healing Stage Interpretation in e-Health (2026-02-26). The framework employs a multi-layer perceptron model trained across simulated clients using the Flower FL framework. The proposed approach integrates an Adaptive Trust Score Scaling and Filtering (ATSSSF) mechanism with exponential moving average (EMA) smoothing to assess, validate and filter client contributions. Two trust score smoothing strategies have been investigated, one with a fixed factor and another that adapts according to trust score variability. Clients with low trust are excluded fr...

[16] Differential privacy has become the gold standard for protecting individual data in analytics and machine learning, but it still relies on outdated assumptions about how people trust one another (2026-01-24). By tailoring privacy guarantees to each user's local trust environment, TGDP can offer higher utility than local DP while maintaining more realistic privacy boundaries than central DP. It reflects a philosophical shift as much as a technical one: from privacy as a global policy to privacy as a networked, context-aware contract. How Trust Affects Accuracy: In TGDP, privacy is tied to trust, but so is performance. The more people you trust (and who trust each other), the more accurately you can com...

[17] EdgeGuard-AI: Zero-Trust and Load-Aware Federated Scheduling for Secure and Low-Latency IoT Edge Networks (2026-03-22). EdgeGuard-AI significantly reduces unsafe assignments because trust and risk constraints in Equation (12) directly filter candidate nodes before optimization. Table 10 shows that EdgeGuard-AI supports a controllable security-performance balance through the trust threshold. This behavior follows directly from the constrained formulation in Equation (12). Figure 2 shows that EdgeGuard-AI maintains stable latency during high-rate attack bursts. Methods without trust-aware filtering continue to as...

[18] Methods, Systems, And Procedures For Quantum Secure Ecosystems (2026-05-06). A non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations for providing crypto-agile connectivity, the operations comprising: accessing first encryption information from a first communication orchestrator of a first protected environment and second encryption information from a second communication orchestrator of a second protected environment; updating an encryption techniq...

[19] Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks (2026-01-22). This study presented FedGCL, a secure federated learning framework for IoMT that integrates contrastive graph representation learning, fairness-aware aggregation, and TEE-based secure aggregation. Experimental results on four benchmark datasets demonstrate that FedGCL converges 45% faster than FedAvg, achieving 98.9% accuracy by round 20, with only ~10% additional overhead. These findings confirm FedGCL's potential as an efficient and privacy-preserving solution for real-world IoMT deployments...

[20] Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation from GNNs to MLPs (2025-12-31). Nonetheless, graph structure may be unavailable for some scenarios, e.g., in federated graph learning. In this work, we show it is possible to effectively distill the graph structural knowledge from GNNs to MLPs under an edge-free setting. Prototypical Networks (Snell et al., 2017) have been widely applied in few-shot learning and metric learning on classification tasks (Huang and Zitnik, 2020). The basic idea is that there exists an embedding in which points cluster around a s...

[21] Zero-Shot Policy Transfer in Multi-Agent Reinforcement Learning via Trusted Federated Explainability (2026-02-27). This paper proposes TFX-MARL (Trusted Federated Explainability for MARL), a governance-inspired framework for zero-shot policy transfer across silos using trust metric-based federated learning (FL) and explainability controls. TFX-MARL contributes: (i) a trust metric that quantifies participant integrity and accountability using provenance, update consistency, local evaluation reliability, and safety-compliance signals; (ii) a trust-aware federated aggregation protocol that reduces poisoning ri...

[22] The introduction of BadUnlearn highlights a previously unaddressed security risk, demonstrating that FU alone is not a guaranteed solution to removing poisoned influences (2026-04-10). The researchers conducted extensive experiments on the MNIST dataset, testing different federated learning and unlearning methods under various attack conditions. The findings reveal that BadUnlearn significantly compromises existing FU methods. Standard aggregation techniques like FedAvg, Median, and Trimmed-Mean were particularly vulnerable, as they failed to remove the influence of malicious clients. Furthermore, FedRecover, a commonly used unlearning method, proved ineffective against BadUnl...

[23] Blockchain 6G-Based Wireless Network Security Management with Optimization Using Machine Learning Techniques (2024-09-22). Figure 4 illustrates the general trend in packet loss rates for all techniques as the number of malicious nodes displaying aggressive behaviour increases. In Trusted Route Detection, only trusted nodes are accessed; this is achieved by combining MN node evaluation with the node trust factor, and in a WSN the trusted route aids in safe data transfer...