CCA Reimagined: An Exploratory Study of Large Language Models for Congestion Control
Xiaoxuan Qin, Yufei Wang, Longfei Shangguan
Abstract
In this paper, we conduct an emulation-guided study to systematically investigate the feasibility of large language model (LLM)-driven congestion control. The study is structured into two phases. The first phase de-risks the overall capability: we isolate the LLM's role to a single yet crucial congestion-avoidance phase so that we can safely examine when to invoke the LLM, what information to provide it, and how to formulate its instructions. Building on these insights, the second phase extends the LLM's role to multiple congestion control phases and proposes a more general LLM-based congestion control policy. Our evaluation on both static and dynamic network traces demonstrates that the LLM-based solution can reduce latency by up to 50\% with only a marginal throughput sacrifice (less than 0.3\%) compared to traditional CCAs. Overall, our exploratory study confirms the potential of LLMs for adaptive and general congestion control, demonstrating that when granted appropriate control freedom and paired with an effective triggering mechanism, LLM-based policies achieve significant performance gains, particularly under highly dynamic network conditions.
