Validation: Partial Observability Amplification of Misalignment

ValidatedEL 5/8TF 6/8

Innovation Maturity

Evidence Level:5/8Partially Described / Inferred
Timeframe:6/8Short Term (6-12 mo)

Evidence: BAAC is a synthesis of several techniques that are individually described in the literature, but the integrated framework itself has not yet been published or deployed.

Timeframe: Combining and validating the components in a MARL setting could be achieved within 6–12 months of focused development.

5.1 Identify the Objective

The objective of this chapter is to articulate a forward‑looking framework that amplifies misalignment signals arising from partial observability in multi‑agent reinforcement learning (MARL) systems, thereby enabling resilient interpretability and trustworthy coordination. Specifically, we aim to:
1. Quantify how incomplete state information inflates credit‑assignment and coordination errors;
2. Develop abstraction‑driven representations that preserve task‑relevant modalities while filtering spurious observations;
3. Integrate dynamically‑adaptive communication protocols that reduce information bottlenecks without over‑loading network resources; and
4. Propose a joint training‑execution architecture that explicitly models belief trajectories, allowing agents to detect and correct misalignment in real time.

This objective aligns with the emerging consensus that partial observability is a principal catalyst for misalignment in decentralized AI systems [1][2][3].

5.3 Ideate/Innovate

We propose a Belief‑Augmented Abstraction & Communication (BAAC) framework that simultaneously addresses partial observability and misalignment by:

  1. Hierarchical Belief‑Aware Abstraction – Agents learn a multi‑scale belief hierarchy where low‑level sensory embeddings are compressed through a variational bottleneck [12][13]. The bottleneck is conditioned on the agent’s own observation history and a shared “world‑model” prior, ensuring that only task‑relevant latent factors survive. This mirrors the emergent abstraction mechanism in PRD [9] but extends it to belief space, enabling agents to explicitly encode uncertainty and propagate it through the hierarchy.

  2. Dynamic Belief‑Driven Communication (DBDC) – Instead of fixed message formats, agents generate communication tokens that encode belief divergences relative to a shared prior. A lightweight attention‑based encoder selects the most informative belief dimensions to transmit, and a decoder reconstructs a joint belief estimate at the receiver. This approach leverages the principle of belief modeling in decentralized POMDPs [11][2] and aligns with the attention‑based communication schemes in SlimeComm [15] .

  3. Joint Belief‑World Model (JBWM) – A unified autoregressive model predicts both the next observation and the next belief vector conditioned on past actions and communicated beliefs [16] . By interleaving “imagining the next view” with “predicting the next action,” JBWM reduces state‑action misalignment, as demonstrated in unified autoregressive frameworks [16] .

  4. Misalignment‑Aware Reward Decomposition – Credits are allocated not only based on the shared reward but also on a misalignment penalty derived from the divergence between each agent’s belief and the joint belief. This encourages agents to align their internal models proactively and is inspired by the credit‑assignment focus in PRD [9] and the intrinsic‑reward approaches in Meta‑Policy Gradient [8] .

  5. Adversarial Alignment Detection – A lightweight discriminator observes the joint belief trajectory to flag abnormal divergences, providing a safeguard against reward hacking and deceptive policies [17][18].

Collectively, BAAC transforms misalignment from an incidental error into an explicit, learnable signal that agents can observe, communicate, and correct.

Independent Validation

Partial observability credit assignment errors in MARL

partial observability credit assignment errors MARLmisalignment due to incomplete state information multi-agent reinforcement learningobservability impact on coordination errors MARLpartial observability inflation credit assignment multi-agent
Partial observability remains the most stubborn obstacle to effective credit assignment in cooperative MARL. When agents only receive local, noisy observations, the joint reward signal cannot be cleanly decomposed into individual contributions, leading to spurious correlations and delayed learning. Recent work on Contribution‑Gated Credit Assignment (CGCA) demonstrates that a locality‑aware credit structure, coupled with a parsimonious observation interface, can mitigate these errors and enable communication‑free coordination in cluttered pursuit‑evasion scenarios [v2439]. CGCA’s success hinges on restricting the observation space to essential features, thereby reducing the dimensionality of the credit‑assignment problem and improving sample efficiency [v3255].Theoretical analyses of credit‑assignment schemes under partial observability reveal that counterfactual baselines (e.g., COMA) and value‑factorisation methods (e.g., QMIX) suffer from relative over‑generalisation when the reward function is non‑monotonic [v3333]. Empirical studies on SMAC and MPE benchmarks confirm that these pathologies manifest as coordination failures, especially when communication is unreliable or delayed [v3338]. Addressing this requires algorithms that explicitly model the hidden state dynamics or employ auxiliary tasks that expose latent coordination signals.Practical mitigation strategies therefore combine three elements: (1) compact, task‑specific observation encoders that preserve the most informative cues; (2) counterfactual or variance‑regularised credit‑assignment estimators that are robust to non‑stationarity; and (3) auxiliary objectives (e.g., predictive modelling of other agents’ actions) that provide additional supervision under partial observability. When integrated within a CTDE framework, these components have shown consistent improvements in coordination speed and final performance across a range of benchmark domains, suggesting a promising direction for future MARL research.

Hierarchical belief-aware abstraction variational bottleneck

hierarchical belief abstraction variational bottleneck multi-agentbelief hierarchy variational bottleneck task relevant modalitiescompress sensory embeddings variational bottleneck belief spaceworld-model prior belief hierarchy multi-agent
Hierarchical belief‑aware abstraction with a variational bottleneck seeks to compress high‑dimensional sensory streams into a low‑dimensional latent policy representation while preserving task‑relevant information. The core idea is to impose an information‑theoretic constraint—typically a Kullback‑Leibler penalty—on the latent code so that it contains only the mutual information necessary for predicting future actions or goals. This approach has been shown to improve sample efficiency in goal‑conditioned reinforcement learning, where the bottleneck learns a compact goal representation that generalises across unseen states [v299].In multi‑agent settings, a graph‑based information bottleneck (CGIBNet) extends the same principle to belief‑aware communication. By regularising both the graph structure and node embeddings, agents learn to exchange only the most salient belief updates, reducing bandwidth while maintaining coordination quality [v676]. This aligns with hierarchical option discovery, where each primitive policy is equipped with its own variational bottleneck that quantifies how much state information it utilises; the higher‑level controller can then select primitives based on their information usage, yielding interpretable and efficient hierarchical control [v1043].Empirical studies demonstrate that such bottlenecks not only accelerate learning but also enhance out‑of‑distribution robustness. When the latent space is constrained, the model learns disentangled factors that capture invariant task structure, leading to better generalisation to novel environments [v4628]. Moreover, the hierarchical decomposition allows for multi‑scale reasoning: coarse‑level abstractions guide long‑term planning, while fine‑level bottlenecks handle immediate sensory contingencies, mirroring the semi‑MDP framework for temporal abstraction [v6260].Overall, hierarchical belief‑aware abstraction with a variational bottleneck offers a principled way to balance compression, interpretability, and performance in complex, partially observable domains. By coupling information‑theoretic regularisation with hierarchical policy decomposition, it provides a scalable path toward sample‑efficient, robust, and modular reinforcement learning agents.

Dynamic belief-driven communication attention encoder

dynamic belief-driven communication attention encoder multi-agentbelief divergence communication tokens multi-agentattention-based communication selective belief dimensionsbelief divergence message encoding decentralized POMDP
Dynamic belief‑driven communication attention encoders are designed to fuse heterogeneous signals—such as physical sensor streams, social‑relational graphs, cognitive‑state embeddings, and digital information—into a unified belief representation that guides selective attention over communication content. The CyberCorpus framework demonstrates how a four‑dimensional encoder can process these modalities simultaneously while a dynamic contextual attention mechanism prioritizes the most informative components for downstream tasks. [v7456]Architecturally, a global‑locally self‑attentive encoder has proven effective for dialogue‑state tracking, where it captures both global discourse trends and fine‑grained local cues, enabling the belief state to be updated in real time. This design is directly applicable to communication attention, as it allows the model to weigh context‑dependent signals and maintain a coherent belief over the conversation. [v2529]The encoder can be instantiated with a variety of machine‑learning backbones—transformers, LSTMs, convolutional nets, or hybrid architectures—depending on latency, memory, and accuracy requirements. Recent work shows that transformer‑based encoders, possibly augmented with attention‑based gating, achieve state‑of‑the‑art performance while remaining amenable to hardware acceleration. [v12098]Real‑time multimodal fusion is facilitated by system‑on‑chip (SoC) platforms that integrate high‑bandwidth sensors (LiDAR, cameras, IMUs) and peripheral interfaces, ensuring that raw data can be pre‑processed and fed into the encoder with minimal overhead. Such SoC designs support the low‑latency inference needed for interactive communication systems. [v947]In multi‑agent settings, the belief‑driven attention mechanism can be formalized within a Decentralized Partially Observable Markov Decision Process (Dec‑POMDP) framework, where each agent maintains a belief over the joint state and exchanges compressed messages. The encoder updates these beliefs and selects attention weights that optimize collective performance, enabling coordinated communication in partially observable environments. [v1048]

Joint belief-world model autoregressive prediction

joint belief world model autoregressive multi-agentpredict next observation next belief conditioned actions communicationautoregressive belief prediction multi-agent reinforcement learningimagining next view predicting next action joint model
Joint belief‑world models aim to fuse probabilistic belief propagation with autoregressive generation so that multi‑agent trajectories are sampled from a joint distribution that respects both individual dynamics and inter‑agent constraints. This is achieved by casting the problem on a factor graph where message passing supplies potentials that guide a transformer‑based autoregressive decoder, enabling coherent joint predictions while retaining the flexibility of sequence models. [v1334]A common design pattern in the literature is to first generate a small set of marginal trajectories for each agent independently and then score each pair of trajectories with a learned potential. While this separation simplifies training, it neglects temporal dependencies within each trajectory, making the conditional forecasts vulnerable to spurious correlations and unrealistic reaction patterns. Empirical studies show that such approaches can produce less realistic joint predictions compared with fully integrated models. [v7092]The VBD (Variational Belief‑Diffusion) model demonstrates that a joint diffusion policy can achieve competitive realism with fewer parameters than pure autoregressive generators, offering a computational advantage. However, benchmark evaluations on traffic scenarios reveal a remaining performance gap relative to state‑of‑the‑art autoregressive baselines such as SMART and BehaviorGPT, indicating that parameter efficiency alone does not guarantee parity in predictive fidelity. [v9146]Autoregressive models are also prone to compounding error: small inaccuracies at early time steps are fed back as inputs, leading to exponential drift from true dynamics over long horizons. This phenomenon underscores the need for explicit belief estimation or alternative inference strategies that can correct for accumulated errors and maintain distributional alignment with real trajectories. [v696]Recent work introduces an interaction‑graph exteroception representation that explicitly captures fine‑grained joint‑to‑joint spatial dependencies. Coupled with a sparse edge‑based attention mechanism that prunes redundant connections, this approach enhances the robustness of interaction modeling and improves the physical plausibility of generated multi‑agent behaviors. [v675]

Misalignment-aware reward decomposition

misalignment aware reward decomposition belief divergence multi-agentcredit assignment misalignment penalty belief divergenceintrinsic reward misalignment penalty multi-agentreward decomposition based on belief divergence
Misalignment‑aware reward decomposition tackles the core problem that a single, sparse reward signal—typically obtained only after a full action or dialogue turn—fails to provide fine‑grained credit to the individual tokens or sub‑actions that actually drive performance. Chen et al. show that naïvely propagating the terminal reward to every token (Equation 5) can misalign token generation with overall action quality, leading the model to reinforce unhelpful or even harmful segments of code or text [v9152]. By decomposing the reward into token‑ or sub‑action‑level components, the policy can learn which parts of a sequence are truly valuable, reducing the risk of reward hacking and improving sample efficiency.A practical instantiation of this idea uses a KL‑divergence penalty to keep the fine‑tuned policy close to the original model while still allowing token‑wise adjustments. Experiments with a KL‑regularized objective demonstrate that moderate penalties preserve baseline capabilities while enabling the agent to shift probability mass toward high‑reward tokens, whereas overly aggressive penalties can freeze learning or cause instability [v13176]. This dynamic trust‑region approach mirrors recent work on adaptive KL constraints in PPO‑style algorithms, which have shown that per‑token reward signals can be integrated without catastrophic forgetting.To detect and correct misalignment during training, adapter modules can be inserted that monitor the contextual relevance of each token. These adapters employ a contextual validation layer that flags when a token’s contribution diverges from the expected reward pattern, and then generate bridging thoughts or auxiliary loss terms to reconcile the discrepancy [v11850]. Such modular adapters have been shown to improve robustness in multi‑turn dialogue settings, where the reward signal is delayed and the model must maintain coherence across turns [v13839].Overall, misalignment‑aware reward decomposition offers a principled framework for aligning token‑level learning with global objectives. When combined with KL‑regularized policy updates and adapter‑based monitoring, it yields more reliable credit assignment, mitigates reward hacking, and improves generalization to unseen contexts. Future work should explore adaptive KL schedules and hierarchical reward structures to further reduce the gap between sparse external signals and fine‑grained internal representations c32cc8c5245c1605.

Adversarial alignment detection discriminator joint belief trajectory

adversarial alignment detection discriminator joint belief trajectorydetect abnormal belief divergence multi-agentdiscriminator joint belief trajectory reward hacking detectionadversarial robustness belief trajectory monitoring
Adversarial alignment detection hinges on training a discriminator to expose distributional gaps between expert and agent trajectories. In cross‑domain visual adaptation, a domain discriminator is coupled with an encoder that learns to confuse it, yielding domain‑invariant features that preserve class structure [v13053]. This same principle can be extended to temporal belief trajectories: by treating the agent’s belief evolution as a sequence, the discriminator learns to distinguish it from expert trajectories, providing a learning signal that nudges the agent toward the expert distribution.Online trajectory alignment (OTA) demonstrates that directly imposing an adversarial loss between teacher and student trajectories improves few‑step distillation. OTA trains on authentic teacher trajectories, ensuring that the student’s belief updates remain on‑trajectory and match inference distributions [v1355]. When combined with a discriminator that evaluates the joint belief trajectory, the student learns to mimic not only the final state but the entire temporal evolution, which is critical for tasks requiring coherent long‑horizon planning.Generative adversarial networks have been successfully applied to synthesize realistic motion trajectories. A GAN framework that uses an LSTM‑CNN generator and a CNN discriminator can capture both temporal dependencies and distribution tails in eye‑gaze velocity trajectories [v2861]. The discriminator’s feedback ensures that generated belief trajectories are statistically indistinguishable from real ones, providing a robust training objective for alignment.Adversarial imitation learning further refines this approach by treating the agent’s trajectories as unlabeled data rather than negative examples. The discriminator is trained to distinguish expert from agent trajectories, while the agent policy is updated to fool it, effectively aligning the agent’s belief dynamics with the expert distribution [v448]. This semi‑supervised setup mitigates the risk of over‑fitting to a small expert set and promotes generalization across diverse belief scenarios.Finally, incorporating an interaction prior that includes a pose discriminator and an interaction discriminator can enforce coordinated multi‑agent belief trajectories. Such a prior encourages local articulation refinement while promoting global consistency, which is essential when multiple agents share a joint belief space [v625]. Together, these techniques form a cohesive framework for adversarial alignment detection that leverages discriminators to shape joint belief trajectories toward expert‑like behavior.

BAAC framework benefits: explicit misalignment modeling, efficient communication, robustness, scalability, interpretability

BAAC framework explicit misalignment modeling multi-agentefficient communication belief-driven communication multi-agentrobustness to adversarial perturbations joint belief world modelscalable credit assignment belief divergence multi-agenttransparent interpretability belief hierarchy multi-agent
The BAAC framework’s core advantage lies in its explicit modeling of misalignment. By systematically characterizing agent profiles—combining alignment dimensions with motivational states—researchers can quantify how deceptive or divergent behaviors arise and predict their impact on multi‑agent coordination [v6784]. This level of granularity enables designers to pre‑emptively adjust reward structures or communication protocols before misalignment manifests in the field.Efficient communication and scalability emerge from BAAC’s abstraction‑driven architecture. Partial Reward Decoupling (PRD) dynamically partitions teams into sub‑groups, simplifying credit assignment and reducing the bandwidth required for inter‑agent messaging [v10273]. By learning what information to transmit, to whom, and how to encode it, the framework maintains performance even under strict communication constraints, making it suitable for large‑scale, heterogeneous deployments.Robustness is addressed on two fronts. A bounded formulation that enforces structural, ethical, and ecological limits stabilizes agent behavior across diverse environments [v1026], while belief‑augmentation loops that combine adversarial prompting with iterative feedback harden agents against malicious inputs [v16323]. Together, these mechanisms mitigate both accidental and intentional deviations from intended goals.Finally, interpretability is achieved through modular, chain‑of‑experts designs that separate symbolic reasoning from generative components. By exposing decision trees and rule‑based oracles as callable agents, BAAC provides transparent, human‑readable explanations for complex multi‑agent actions [v15179]. This interpretability not only aids debugging but also builds trust in safety‑critical applications.

Empirical evidence from related works supporting BAAC feasibility

world-model utility abstraction multi-agent reinforcement learningstate action misalignment reduction unified autoregressive modelsbelief-driven communication success multi-agent reasoningPRD belief hierarchy empirical resultsSlimeComm bandwidth efficient communication multi-agent
Empirical studies demonstrate that the core components of a BAAC system can be realized with current deep‑learning and reinforcement‑learning techniques. WebGen‑R1, a large‑scale foundation model trained on web‑scale data, consistently outperformed proprietary and open‑source baselines such as GPT‑5 and Qwen3‑32B on attack‑success‑rate (ASR) benchmarks, indicating that learned architecture‑level abstractions remain robust when deployed in evolving real‑world settings. [v8549]The architectural design of BAAC agents benefits from a structured perception‑to‑action pipeline. A state‑abstraction module maps raw visual features to a hierarchical object representation, while a control‑policy module instantiates transition logic that governs executable workflows. This joint modeling of perception and reasoning yields interpretable outputs that bridge scene understanding and structured action generation, a key requirement for reliable agentic behavior. [v9512]Multi‑agent coordination has been validated in high‑stakes domains such as UAV swarms. Decentralized deep‑RL policies trained on simulated quadrotor formations achieved zero‑shot transfer to real‑world pursuit‑evasion tasks, demonstrating that scalable, communication‑efficient agent teams can be trained offline and deployed safely. Complementary work on macro‑action‑based deep MARL further shows that temporally abstracted policies can be learned efficiently, enabling agents to plan over long horizons while reducing sample complexity. [v13135][v13336]Finally, efficient planning under bandwidth and latency constraints is supported by algorithms that converge under linear function approximation while planning with temporally abstract actions. Such methods provide a principled way to integrate event‑triggered communication and hierarchical decision‑making, ensuring that BAAC agents can maintain coordination without exhausting limited resources. [v12898]

5.4 Justification

The BAAC framework offers several decisive advantages over conventional CTDE‑centric solutions:

Empirical evidence from related works—such as the improvement of world‑model utility under abstraction [9], reduction of state‑action misalignment in unified autoregressive models [16], and the success of belief‑driven communication in multi‑agent reasoning [11]—supports the feasibility of BAAC. By converting partial observability into a structured misalignment signal, we pave the way for trustworthy, resilient coordination in adversarial, large‑scale multi‑agent AI systems.

Appendix A: Validation References

[v299] D3HRL: A Distributed Hierarchical Reinforcement Learning Approach Based on Causal Discovery and Spurious Correlation Detection
https://doi.org/10.48550/arxiv.2505.01979
[v448]2019 AI Alignment Literature Review and Charity Comparison (Larks) (summarized by Rohin): As in three previous years (AN #38), this mammoth post goes through the work done within AI alignment from De
https://www.lesswrong.com/s/dT7CKGXwq9vt76CeX/p/D7CY29s2D6HJirqcF
[v625]Stability-Driven Motion Generation for Object-Guided Human-Human Co-Manipulation
https://arxiv.org/abs/2604.20336
[v675]InterAgent: Physics-based Multi-agent Command Execution via Diffusion on Interaction Graphs
https://doi.org/10.48550/arXiv.2512.07410
[v676]Multi-agent Communication with Graph Information Bottleneck under Limited Bandwidth
https://www.semanticscholar.org/paper/de7e81b1c897c85e0bc88e6644ece43bcac06c4f
[v696]State-Action Inpainting Diffuser for Continuous Control with Delay
https://arxiv.org/abs/2603.01553
[v947]LLM as Graph Kernel: Rethinking Message Passing on Text-Rich Graphs
https://arxiv.org/abs/2603.14937
[v1026]This edition consolidates and stabilizes the generative integration first formalized in PSRT v2.0, and supersedes the earlier PTI-focused v1.
https://zenodo.org/records/17932629
[v1043] Hierarchical Task Network Planning for Facilitating Cooperative Multi-Agent Reinforcement Learning
https://doi.org/10.48550/arxiv.2306.08359
[v1048]Recently, deep multi-agent reinforcement learning (MARL) has gained significant popularity due to its success in various cooperative multi-agent tasks.
https://doi.org/10.48550/arxiv.2308.11272
[v1334]Online Bayesian system identification in multivariate autoregressive models via message passing
https://arxiv.org/abs/2506.02710
[v1355]FlowSteer: Guiding Few-Step Image Synthesis with Authentic Trajectories
https://arxiv.org/abs/2511.18834
[v2439]Less is More: Robust Zero-Communication 3D Pursuit-Evasion via Representational Parsimony
https://arxiv.org/abs/2603.08273
[v2529]InFoBERT: Zero-Shot Approach to Natural Language Understanding Using Contextualized Word Embedding
https://doi.org/10.26615/978-954-452-072-4_025
[v2861]Modeling eye gaze velocity trajectories using GANs with spectral loss for enhanced fidelity
https://doi.org/10.1038/s41598-025-05286-5
[v3255]Multi-Agent Reinforcement Learning (MARL) is a rapidly evolving field that promises dynamic solutions for complex tasks within multi-agent systems (MAS) 1.
https://atoms.dev/insights/multi-agent-reinforcement-learning-for-coding-foundations-applications-challenges-and-future-directions/2d27a831498a42fb91e22937bd6b95fc
[v3333]Generalized Per-Agent Advantage Estimation for Multi-Agent Policy Optimization
https://arxiv.org/abs/2603.02654
[v3338]Abstract: AI safety and alignment research has predominantly been focused on methods for safeguarding individual AI systems, resting on the assumption of an eventual emergence of a monolithic AGI. Th
https://www.emergentmind.com/papers/2512.16856
[v4628]Understanding disentangling in β-VAE
https://arxiv.org/abs/1804.03599
[v6260] GitHub - tigerneil/awesome-deep-rl: For deep RL and the future of AI.
https://github.com/tigerneil/awesome-deep-rl
[v6784]As LLM-based agents increasingly operate in multi-agent systems, understanding adversarial manipulation becomes critical for defensive design.
https://verso.uidaho.edu/esploro/outputs/preprint/Intentional-Deception-as-Controllable-Capability-in/996896856401851
[v7092] MotionLM: Multi-Agent Motion Forecasting as Language Modeling
https://doi.org/10.48550/arxiv.2309.16534
[v7456]Cyberlanguage: Native Communication for the Cyber-Physical-Social-Thinking Fusion Space
https://arxiv.org/abs/2603.17498
[v8549]WebGen-R1: Incentivizing Large Language Models to Generate Functional and Aesthetic Websites with Reinforcement Learning
https://arxiv.org/abs/2604.20398
[v9146]Versatile Behavior Diffusion for Generalized Traffic Agent Simulation
https://doi.org/10.1109/tits.2026.3662886
[v9152]Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement
https://arxiv.org/abs/2402.06700
[v9512]OOWM: Structuring Embodied Reasoning and Planning via Object-Oriented Programmatic World Modeling
https://arxiv.org/abs/2604.09580
[v10273] Modeling what Matters: Emergent Abstraction In Reinforcement Learning - Robotics Institute Carnegie Mellon University
https://www.ri.cmu.edu/event/modeling-what-matters-emergent-abstraction-in-reinforcement-learning/
[v11850]Persistent cognitive machine with curated long term memory
https://patents.google.com/?oq=19321173
[v12098]Neural Rendering For Inverse Graphics Generation
https://ppubs.uspto.gov/pubwebapp/external.html?q=(20260127820).pn
[v12898]Multi-Timescale, Gradient Descent, Temporal Difference Learning with Linear Options
https://arxiv.org/abs/1703.06471
[v13053]Non-Intrusive Load Monitoring Model Based on SimCLR and Visualized Color V-I Trajectories
https://pubmed.ncbi.nlm.nih.gov/41755171/
[v13135] Reinforcement Learning for Decision-Level Interception Prioritization in Drone Swarm Defense
https://doi.org/10.48550/arxiv.2508.00641
[v13176]GoDaddy Inc.: DEF 14A (DEF 14A)
https://www.sec.gov/Archives/edgar/data/0001609711/0001609711-26-000030-index.htm
[v13336]Deep Reinforcement Learning for Decentralized Multi-Robot Exploration With Macro Actions
https://doi.org/10.1109/lra.2022.3224667
[v13839]by Jan Betley, Owain_Evans
https://www.lesswrong.com/posts/ifechgnJRtJdduFGC/emergent-misalignment-narrow-finetuning-can-produce-broadly
[v15179]MIRROR: A Multi-Agent Framework with Iterative Adaptive Revision and Hierarchical Retrieval for Optimization Modeling in Operations Research
https://doi.org/10.48550/arXiv.2602.03318
[v16323] Adversarial Examples (AI)Adversarial TrainingAI EvaluationsDeceptive AlignmentMachine Learning (ML)AI
https://www.lesswrong.com/posts/oPnFzfZtaoWrqTP4H/solving-adversarial-attacks-in-computer-vision-as-a-baby

Appendix: Cited Sources

1
Misalignment in Multi-Agent Systems (MAS) is frequently treated as a technical failure. 2025-12-31
Just as perception shifts in the illusion, MAS frameworks can be framed differently depending on theoretical or empirical perspectives, leading to inconsistent definitions of coordination and cooperation.In complex or uncertain environments, incomplete knowledge and partial observability further blur the distinction between coordinating tasks and cooperating for collective benefit, thereby amplifying the reach of the Misalignment Mosaic.While the Rabbit-Duck illusion broadly represents perceptua...
2
Double Distillation Network for Multi-Agent Reinforcement Learning 2025-02-04
Multi-agent reinforcement learning typically employs a centralized training-decentralized execution (CTDE) framework to alleviate the non-stationarity in environment. However, the partial observability during execution may lead to cumulative gap errors gathered by agents, impairing the training of effective collaborative policies....
3
Shanxi Normal University, Taiyuan, China 2026-01-13
Abstract:Multi-agent reinforcement learning typically employs a centralized training-decentralized execution (CTDE) framework to alleviate the non-stationarity in environment. However, the partial observability during execution may lead to cumulative gap errors gathered by agents, impairing the training of effective collaborative policies....
4
Boosting Value Decomposition via Unit-Wise Attentive State Representation for Cooperative Multi-Agent Reinforcement Learning 2025-12-31
For the problems of non-stationarity and partial observability, an appealing paradigm is Centralized Training and Decentralized Execution (CTDE)....
5
Type-1 Harq-ack Codebook For A Single Downlink Control Information Scheduling Multiple Cells 2026-05-06
Dynamic HARQ-ACK codebook avoids reserving unnecessary bits as in a semi-static HARQ codebook, where an A/N bit is present only if there is a corresponding transmission scheduled and relies on downlink assignment indicator (DAI) mechanism to avoid misalignments between the UE and gNB on codebook size. FIG. illustrates the timeline in a simple scenario with two PDSCHs and one feedback. In this example there is in total 4 PUCCH resources configured, and the PRI indicates PUCCH 2 to be used for HAR...
6
The Essence of Balance for Self-Improving Agents in Vision-and-Language Navigation 2026-04-20
On the one hand, the agent benefits from behavioral diversity-maintaining multiple plausible latent hypotheses for the next action under linguistic ambiguity and partial observability.On the other hand, self-improvement from policy-induced trajectories requires learning stability, so that updates remain consistent enough to accumulate progress across iterations.This creates an inherent tension: increasing diversity can uncover better hypotheses under ambiguity, but may introduce inefficient expl...
7
Enhancing Heterogeneous Multi-Agent Cooperation in Decentralized MARL via GNN-driven Intrinsic Rewards 2024-08-11
We additionally compare with the state-of-the-art MARL baseline, IPPO (Independent Proximal Policy Optimization), which is applicable in decentralized training settings for heterogeneous agents under partial observability similar to HetGPPO. Unlike the two centralized critic-based heterogeneous MARL approaches discussed in the 'Related Works' section or widely used algorithms such as MADDPG , MAPPO , COMA , etc., these baselines along with CoHet address the more challenging problem of not relyin...
8
Credit Assignment with Meta-Policy Gradient for Multi-Agent Reinforcement Learning 2021-02-23
Reward decomposition is a critical problem in centralized training with decentralized execution~(CTDE) paradigm for multi-agent reinforcement learning. (2021)...
9
Modeling what Matters: Emergent Abstraction In Reinforcement Learning - Robotics Institute Carnegie Mellon University 2026-04-17
Modeling what Matters: Emergent Abstraction In Reinforcement Learning - Robotics Institute Carnegie Mellon University Modeling what Matters: Emergent Abstraction In Reinforcement Learning 2025-12-12 15:00:002025-12-12 16:30:00 Benjamin (Ben) Freed PhD Student Robotics Institute, Abstract: Real-world decision-making is rife with partial observability, long horizons, and complex multi-agent interactions. This thesis argues that abstraction - forming simplified representations of the task that reta...
10
JADE: Bridging the Strategic-Operational Gap in Dynamic Agentic RAG 2026-01-28
This effectively solves the temporal credit assignment problem in long-horizon reasoning tasks, ensuring that local execution aligns with global strategic objectives. Methodology In this work, we propose JADE (Joint Agentic Dynamic Execution), a framework that unifies strategic planning and operational execution into a single, end-to-end learnable policy. Unlike prior decoupled approaches where the planner is optimized against fixed, black-box executors, JADE employs homogeneous parameter sharin...
11
CoBel-World: Harnessing LLM Reasoning to Build a Collaborative Belief World for Optimizing Embodied Multi-Agent Collaboration 2025-09-25
CoBel-World: Harnessing LLM Reasoning to Build a Collaborative Belief World for Optimizing Embodied Multi-Agent Collaboration --- However, these approaches typically rely on fixed communication protocols, such as tep-by-step message generation (Zhang et al., 2023), eventdriven multi-round discussion (Liu et al., 2024b), or dense discussion (Guo et al., 2024), leading to excessive communication overhead and poor scalability under partial observability. In contrast, our work introduces a belief-dr...
12
Adversarial Robustness of Bottleneck Injected Deep Neural Networks for Task-Oriented Communication 2024-12-12
Specifically, we apply several common adversarial attacks on recent approaches based on Shallow Variational Bottleneck Injection (SVBI) - ). SVBI focuses on information necessary only for practically relevant tasks by targeting the shallow representation of foundational models as a reconstruction target in the rate-distortion objective. Our results show that deep networks trained with a traditional IB objective exhibit higher adversarial robustness than SVBI. However, a shallow variational encod...
13
TxRay: Agentic Postmortem of Live Blockchain Attacks 2026-01-31
The following key takeaways summarize the main challenges: (i) Filling information gaps under partial observability....
14
What Is an AI-Enabled Cyber-Attack? 2026-04-18
Since ChatGPT's launch, phishing volume has surged by 4,151%, demonstrating how AI removes the bottlenecks that once limited attack campaigns. Precision targeting that actually works: AI-generated phishing emails achieve a 54% success rate compared to just 12% for traditional attacks. Attackers can now scrape social media profiles, corporate websites, and public records to create hyper-personalised messages that reference recent purchases, mutual contacts, or company-specific terminology. Democr...
15
SlimComm: Doppler-Guided Sparse Queries for Bandwidth-Efficient Cooperative 3-D Perception 2025-08-17
An agent becomes a collaborator whenever at least one query lands on a BEV cell whose warped foreground density exceeds the communication threshold: max where (, ) are BEV grid indices. The test is performed only at the finest scale =0, whose higher resolution captures the most detailed occupancy information. Halo-enriched Sparse Feature Encoding. Most existing methods [6,16,26,29] perform early-stage projection: they first transform every CAV's point cloud into the ego frame and then learn all ...
16
Unified World Models: Memory-Augmented Planning and Foresight for Visual Navigation 2025-10-08
Humans naturally excel at such imaginative reasoning, routinely performing mental simulations to plan routes effectively through both familiar and novel scenarios Bar et al. (2025). Despite rapid progress in visual navigation, existing approaches remain constrained by fundamental limitations (Figs. 1). (a) Direct policy methods (e.g., GNM Shah et al. (2022), VINT Shah et al. (2023), NoMaD Sridhar et al. (2024)) map observations directly to action sequences. Although effective within familiar dis...
17
by Kei Nishimura-Gasparian, Artur Zolkowski, robert mccarthy, David Lindner 2026-03-11
Monitoring Large Language Model (LLM) outputs is crucial for mitigating risks from misuse and misalignment. However, LLMs could evade monitoring through steganography: Encoding hidden information within seemingly benign generations. In this paper, we evaluate the steganography capabilities in frontier LLMs to better understand the risk they pose. We focus on two types of steganography: passing encoded messages and performing encoded reasoning....
18
HanoiWorld : A Joint Embedding Predictive Architecture BasedWorld Model for Autonomous Vehicle Controller 2026-01-03
Based on these aforementioned works, this result argue that world-model designing can be potential benefit from the high-quality self-supervised learning embedding from pretrained encoder as V-JEPA 2 and combine with the usage of long-term planner which can reduce and minimalize the cost of inference while remaining accuracy, and tunable model driving quality. The contribution of this studies include 4 keys essential contributions as follow: A unified perspective on world-model design for autono...
19
Deliberative Alignment: Reasoning Enables Safer Language Models 2024-12-19
Deliberative Alignment: Reasoning Enables Safer Language Models --- Alternatively, an AI could remain committed to its human-assigned terminal goal but, in the process, pursue instrumental goals like self-preservation, resource acquisition, or enhancing its cognitive abilities , . These power-seeking tendencies could lead to harmful or unintended consequences. And as models gain more intelligence and autonomy, the scale of potential harm from misalignment increases dramatically, with the risk of...
20
This important study reports a novel approach to studying cerebellar function based on the idea of selective recruitment using fMRI. It provides convincing evidence for task-dependent gating of neoco 2026-04-16
After a 1-s delay, the task progressed to either the retrieval phase (Go trial) or skipped directly to the next trial (No-Go trials). ((B) Proportion of error trials. Error bars indicate standard error of the mean across participants. Figure 4B shows the error rate (trials with at least one wrong press) during the scanning session. As expected, error rates increased with memory load and were also higher in the backwards condition. Consistent with previous imaging studies, the verbal working memo...