CuDA2: An Approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems
Zhen Chen, Yong Liao, Youpeng Zhao, Zipeng Dai, Jian Zhao
TL;DR
This work addresses the vulnerability of cooperative multi-agent reinforcement learning (CMARL) to adversarial perturbations by introducing traitor agents that can indirectly disrupt victim policies. The authors formalize the Traitor Markov Decision Process (TMDP) and propose CuDA2, which combines a pre-trained Random Network Distillation (RND) module with dynamic potential-based reward shaping (PBRS) to guide traitors toward effective attacks without altering their optimal policy. The framework is theoretically justified and empirically validated on SMAC maps: CuDA2 achieves comparable or superior disruption of victim performance across multiple MARL algorithms and traitor counts, and ablations confirm the contribution of the dynamic PBRS component. The work thus offers a realistic and efficient adversarial attack setting for CMARL and motivates defense strategies that harden cooperative systems against traitor-based disruption.
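The policy-invariance claim for the shaping component rests on the classical potential-based result (Ng et al., 1999): adding F(s, s') = γΦ(s') − Φ(s) to the reward changes every trajectory's return by a policy-independent constant, so the optimal policy is unchanged. A minimal sketch of this telescoping property, with a hypothetical potential function standing in for the paper's (unspecified here) choice:

```python
GAMMA = 0.99  # discount factor (illustrative)

def phi(state):
    # Hypothetical potential over a 1-D toy state; the paper's actual
    # potential is not reproduced here, any Phi works for the identity.
    return -abs(state)

def shaped_reward(r, s, s_next, gamma=GAMMA):
    # Potential-based shaping: F = gamma * Phi(s') - Phi(s).
    return r + gamma * phi(s_next) - phi(s)

# Toy trajectory s_0..s_3 and per-step environment rewards r_0..r_2.
traj = [3.0, 2.0, 1.0, 0.5]
rewards = [0.0, 0.0, 1.0]

plain_return = sum(GAMMA**t * rewards[t] for t in range(3))
shaped_return = sum(
    GAMMA**t * shaped_reward(rewards[t], traj[t], traj[t + 1]) for t in range(3)
)

# The shaping terms telescope: the two returns differ only by
# GAMMA**3 * Phi(s_3) - Phi(s_0), a constant independent of the policy.
```

Because the difference between shaped and unshaped returns depends only on the endpoint potentials, any policy ranking under the shaped reward matches the ranking under the original reward.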
Abstract
Cooperative Multi-Agent Reinforcement Learning (CMARL) strategies are well known to be vulnerable to adversarial perturbations. Previous works on adversarial attacks have primarily focused on white-box attacks that directly perturb the states or actions of victim agents, often in scenarios with a limited number of attacks. However, gaining complete access to victim agents in real-world environments is exceedingly difficult. To create more realistic adversarial attacks, we introduce a novel method that injects traitor agents into the CMARL system. We model this problem as a Traitor Markov Decision Process (TMDP), in which traitors cannot directly attack the victim agents but can influence their formation or positioning through collisions. In the TMDP, traitors are trained with the same MARL algorithm as the victim agents, with their reward function set as the negative of the victim agents' reward. Even so, training efficiency for the traitors remains low, because it is difficult for them to associate their own actions with the victim agents' rewards. To address this issue, we propose the Curiosity-Driven Adversarial Attack (CuDA2) framework. CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies while preserving the optimal policy invariance of the traitors. Specifically, we employ a pre-trained Random Network Distillation (RND) module, whose extra reward encourages traitors to explore states unencountered by the victim agents. Extensive experiments on various scenarios from SMAC (StarCraft Multi-Agent Challenge) demonstrate that our CuDA2 framework offers adversarial attack capabilities comparable or superior to those of other baselines.
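The RND mechanism can be sketched compactly: a fixed, randomly initialized target network is distilled into a trainable predictor on the states the victim agents visit; the predictor's remaining error then serves as a curiosity bonus that is high on unvisited states. A minimal numpy illustration, assuming single-layer tanh networks and illustrative hyperparameters (the paper's actual architectures and training details are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEAT_DIM = 8, 16

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(STATE_DIM, FEAT_DIM)) / np.sqrt(STATE_DIM)

def target(s):
    return np.tanh(s @ W_target)

# Trainable predictor network (same shape, for simplicity).
W_pred = rng.normal(size=(STATE_DIM, FEAT_DIM)) / np.sqrt(STATE_DIM)

def intrinsic_reward(s, W):
    # Squared prediction error: small on well-distilled (familiar) states,
    # large on states the predictor has rarely seen.
    return np.sum((np.tanh(s @ W) - target(s)) ** 2, axis=-1)

# "Seen" states: a stand-in for states visited by the victim agents.
seen = rng.normal(size=(64, STATE_DIM))

def train_step(W, states, lr=0.05):
    # One gradient step on the MSE distillation loss (up to a constant).
    pre = states @ W
    err = np.tanh(pre) - target(states)
    grad = states.T @ (err * (1.0 - np.tanh(pre) ** 2)) / len(states)
    return W - lr * grad

r_before = intrinsic_reward(seen, W_pred).mean()
for _ in range(500):
    W_pred = train_step(W_pred, seen)
r_after = intrinsic_reward(seen, W_pred).mean()
# After distillation on the visited states, their bonus shrinks, so the
# traitors are steered toward states outside the victims' experience.
```

In CuDA2 this bonus is added (via the shaping scheme) on top of the negated victim reward, so traitors are rewarded both for harming the victims and for dragging the system into states the victim policies never trained on.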
