CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems

Zhen Chen, Yong Liao, Youpeng Zhao, Zipeng Dai, Jian Zhao

TL;DR

This work addresses the vulnerability of cooperative multi-agent reinforcement learning to adversarial perturbations by introducing traitor agents that can indirectly disrupt victim policies. The authors formalize a Traitor Markov Decision Process (TMDP) and propose CuDA2, which uses a pre-trained Random Network Distillation (RND) module and dynamic potential-based reward shaping to guide traitors toward effective attacks without altering their optimal policy. The framework is theoretically justified and empirically validated on SMAC maps, showing that CuDA2 achieves comparable or superior disruption of victim performance across multiple MARL algorithms and traitor counts, with ablations confirming the value of the dynamic PBRS component. This work offers a realistic and efficient adversarial approach for CMARL and motivates defense strategies to enhance robustness against traitor-based disruptions in cooperative settings.

Abstract

Cooperative Multi-Agent Reinforcement Learning (CMARL) strategies are well known to be vulnerable to adversarial perturbations. Previous works on adversarial attacks have primarily focused on white-box attacks that directly perturb the states or actions of victim agents, often in scenarios with a limited number of attacks. However, gaining complete access to victim agents in real-world environments is exceedingly difficult. To create more realistic adversarial attacks, we introduce a novel method that involves injecting traitor agents into the CMARL system. We model this problem as a Traitor Markov Decision Process (TMDP), where traitors cannot directly attack the victim agents but can influence their formation or positioning through collisions. In TMDP, traitors are trained using the same MARL algorithm as the victim agents, with their reward function set as the negative of the victim agents' reward. Despite this, the training efficiency for traitors remains low because it is challenging for them to directly associate their actions with the victim agents' rewards. To address this issue, we propose the Curiosity-Driven Adversarial Attack (CuDA2) framework. CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies while maintaining the optimal policy invariance of the traitors. Specifically, we employ a pre-trained Random Network Distillation (RND) module, where the extra reward generated by the RND module encourages traitors to explore states unencountered by the victim agents. Extensive experiments on various scenarios from SMAC demonstrate that our CuDA2 framework offers comparable or superior adversarial attack capabilities compared to other baselines.
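The abstract's core mechanism is the RND intrinsic reward: a trainable predictor is regressed onto a fixed, randomly initialized target network, and the prediction error serves as a curiosity bonus that is large for rarely visited states. A minimal NumPy sketch of that mechanism follows (linear networks for brevity; the paper uses deep networks and pre-trains the module under random traitor actions, so this is an illustration, not the authors' implementation):

```python
import numpy as np

class RND:
    """Random Network Distillation sketch: the intrinsic reward is the
    prediction error of a trainable predictor against a fixed random
    target network. Error shrinks for frequently visited states."""

    def __init__(self, obs_dim, feat_dim=16, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W_target = rng.normal(size=(obs_dim, feat_dim))  # fixed random target
        self.W_pred = rng.normal(size=(obs_dim, feat_dim))    # trained predictor
        self.lr = lr

    def intrinsic_reward(self, obs):
        # High for states the predictor has rarely been fit to.
        err = obs @ self.W_pred - obs @ self.W_target
        return float(np.mean(err ** 2))

    def update(self, obs):
        # One gradient step on the mean squared prediction error.
        err = obs @ self.W_pred - obs @ self.W_target   # shape: (feat_dim,)
        grad = 2.0 * np.outer(obs, err) / err.size
        self.W_pred -= self.lr * grad
```

Repeatedly visiting the same state drives its bonus toward zero, which is exactly the "novelty" signal that steers traitors toward states the victim agents have not encountered.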

Paper Structure

This paper contains 26 sections, 15 equations, 9 figures, and 1 algorithm.

Figures (9)

  • Figure 1: CuDA2 framework. First, as indicated by the green box in the figure, we need to define the target that the traitors intend to attack: pre-training and saving a model of the victim agents. Second, as shown by the gray box in the figure, we also need to pre-train the RND module within the strategy where the traitors take random actions. This can reduce the prediction error caused by the state changes of the victim agents resulting from the traitors' random actions. Finally, before training the traitors, we will load the victim agents model. During the training process, we use the pre-trained RND module as a potential function to provide the traitors with intrinsic rewards through the dynamic PBRS method.
  • Figure 2: (6+2)m-vs-6m Map in StarCraft II. We customize a map to train the traitors, where the two traitors are circled in red, the six victim agents are circled in green, and the six enemies are circled in blue. The traitors' goal is to reduce the win rate of victim agents.
  • Figure 3: Deep neural network architecture for RND. $N$ is the number of victim agents. This architecture is used in both our method and the ablation experiments.
  • Figure 4: We test our method under different MARL algorithms on the (6+1)m-vs-6m map, in comparison to the baseline method.
  • Figure 5: Snapshots of our method and the baseline methods. (a) The traitors remain stationary. (b) The traitors take random actions. (c) The traitors are trained using the same algorithm as the victim agents, with their reward function being the negative of the victim agents' reward. (d) The traitors receive extra rewards provided by the CuDA2 framework.
  • ...and 4 more figures
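The dynamic potential-based reward shaping (PBRS) mentioned in the Figure 1 caption combines the extrinsic reward with a time-varying potential, F = γ·Φ_{t+1}(s') − Φ_t(s), a form known to preserve the optimal policy. A minimal sketch under the assumption that the potential values come from the pre-trained RND module (names `phi_curr`/`phi_next` are illustrative, not from the paper):

```python
def shaped_reward(r_ext, phi_next, phi_curr, gamma=0.99):
    """Dynamic potential-based reward shaping:
    total reward = extrinsic reward + gamma * Phi_{t+1}(s') - Phi_t(s).
    Because the shaping term is a potential difference, it changes the
    learning signal without changing which policy is optimal."""
    return r_ext + gamma * phi_next - phi_curr
```

In the CuDA2 setting, `r_ext` would be the negative of the victim agents' reward and the potentials would be the RND prediction errors of successive states, so traitors are nudged toward novel states while their optimal adversarial policy is left invariant.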