
Distributed Covariance Steering via Non-Convex ADMM for Large-Scale Multi-Agent Systems

Augustinos D. Saravanos, Isin M. Balci, Arshiya Taj Abdul, Efstathios Bakolas, Evangelos A. Theodorou

Abstract

This paper studies the problem of steering large-scale multi-agent stochastic linear systems between Gaussian distributions under probabilistic collision avoidance constraints. We introduce a family of distributed covariance steering (DCS) methods based on the Alternating Direction Method of Multipliers (ADMM), each offering different trade-offs between conservatism and computational efficiency. The first method, Full-Covariance-Consensus (FCC)-DCS, enforces consensus over both the means and covariances of neighboring agents, yielding the least conservative safe solutions. The second approach, Partial-Covariance-Consensus (PCC)-DCS, leverages the insight that safety can be maintained by exchanging only partial covariance information, reducing computational demands. The third method, Mean-Consensus (MC)-DCS, provides the most scalable alternative by requiring consensus only on mean states. Furthermore, we establish novel convergence guarantees for distributed ADMM with iteratively linearized non-convex constraints, covering a broad class of consensus optimization problems. This analysis proves convergence to stationary points for PCC-DCS and MC-DCS, while the convergence of FCC-DCS follows from standard ADMM theory. Simulations in 2D and 3D multi-agent environments verify safety, illustrate the trade-offs between methods, and demonstrate scalability to thousands of agents.
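Since all three DCS methods build on consensus ADMM, a minimal one-dimensional sketch of the generic consensus-ADMM iteration may help fix ideas. This is not the paper's DCS algorithms: the quadratic local costs and the target values `a` are illustrative assumptions, standing in for each agent's local objective.

```python
import numpy as np

# Minimal consensus-ADMM sketch (illustrative, not the paper's DCS methods):
# each "agent" i minimizes a local quadratic f_i(x) = 0.5 * (x - a_i)^2
# subject to the consensus constraint x_i = z.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = np.zeros(n)   # local primal variables, one per agent
    z = 0.0           # shared consensus variable
    u = np.zeros(n)   # scaled dual variables for x_i = z
    for _ in range(iters):
        # local x-update (closed form for the quadratic f_i)
        x = (a + rho * (z - u)) / (1.0 + rho)
        # consensus z-update: average of x_i + u_i across agents
        z = np.mean(x + u)
        # dual ascent on the consensus residuals
        u = u + x - z
    return z

a = np.array([1.0, 2.0, 6.0])
print(consensus_admm(a))  # converges to the average of the a_i
```

In the DCS setting the local variables are each agent's control policy (and, depending on the method, full, partial, or no covariance information), and the consensus step is what couples neighboring agents through the collision avoidance constraints.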

Paper Structure

This paper contains 25 sections, 108 equations, 8 figures, 3 tables, and 3 algorithms.

Figures (8)

  • Figure 1: Illustration of inter-agent constraint components via confidence ball separation in the PCC-DCS method.
  • Figure 2: Two-agent illustrative 2D task. The top, middle, and bottom rows correspond to FCC-, PCC-, and MC-DCS, respectively. The samples illustrate $100$ realizations of the agents' distributions. The left column shows their full distribution trajectories, while the remaining subfigures on the right show with solid ellipses the $99.7\%$ confidence regions of their distributions at $k=10,15,20$. The dashed/dotted ellipses show their initial/target distributions. The black shapes are obstacles to be avoided.
  • Figure 3: Safety distances for two-agent 2D task. Left: Inter-agent distance. Right: Distance between agent 1 and obstacle 1. Results shown for $100$ realizations over the time horizon.
  • Figure 4: Multi-drone 3D task with FCC-DCS. The three subfigures show the $99.7 \%$ confidence ellipsoids of the initial and target distributions of the agents (faded colors) and of their current distributions (solid colors). The dark gray shapes are obstacles.
  • Figure 5: Multi-agent 2D task with 32 agents via FCC-DCS. The three subfigures correspond to time instants $k=10,20,30$.
  • ...and 3 more figures

Theorems & Definitions (11)
