
GrandCode: Achieving Grandmaster Level in Competitive Programming via Agentic Reinforcement Learning

DeepReinforce Team, Xiaoya Li, Xiaofei Sun, Guoyin Wang, Songqiao Su, Chris Shum, Jiwei Li

Abstract

Competitive programming remains one of the last human strongholds in coding against AI. The best AI system to date still underperforms the best human competitive programmers: the most recent best result, Google's Gemini~3 Deep Think, attained only 8th place, even though it was not evaluated under live competition conditions. In this work, we introduce GrandCode, a multi-agent RL system designed for competitive programming. The capability of GrandCode is attributed to two key factors: (1) it orchestrates a variety of agentic modules (hypothesis proposal, solver, test generator, summarization, etc.) and jointly improves them through post-training and online test-time RL; (2) we introduce Agentic GRPO, a variant of GRPO designed specifically for multi-stage agent rollouts with delayed rewards and the severe off-policy drift that is prevalent in agentic RL. GrandCode is the first AI system that consistently beats all human participants in live competitive-programming contests: in the three most recent Codeforces live competitions, i.e., Round~1087 (Mar 21, 2026), Round~1088 (Mar 28, 2026), and Round~1089 (Mar 29, 2026), GrandCode placed first in all of them, beating every human participant, including legendary grandmasters. GrandCode shows that AI systems have reached a point where they surpass the strongest human programmers on the most competitive coding tasks.

Paper Structure

This paper contains 60 sections, 28 equations, 8 figures, and 6 tables.

Figures (8)

  • Figure 2: Overview of the full pipeline. In post-training, we continue training on noisy competitive-programming data, perform supervised fine-tuning on reference solutions, train an auxiliary hypothesis-generation policy $\pi_{\mathrm{hypothesis}}$ and a summarization policy $\pi_{\mathrm{summary}}$, and jointly optimize the system with multi-component RL. At test/online-contest time, the model uses direct generation for easy cases and an online test-time RL loop for harder cases.
  • Figure 3: Hypothesis generation and small-scale verification. The agent first proposes a compact characterization ($k_{\max} = \max_v (\mathrm{last}(v) - \mathrm{first}(v))$), then generates small random instances, computes the exact answer via a brute-force solver that enumerates all tuples $(i,j,k)\in S(b)$, and compares it against the hypothesized value. A mismatch triggers hypothesis revision; only validated hypotheses are promoted to solution synthesis.
  • Figure 4: Examples of contest figures whose visual structure is difficult to capture with text-only descriptions.
  • Figure 5: An illustration of pipelined context parallelism for one block with 3 DeltaNet layers (L1, L2, L3) + 1 softmax attention layer with 4 CP ranks and 4 micro-batches for illustration purposes. In the DeltaNet phase, each GPU processes micro-batches (MB) in a staggered pipeline, passing the recurrent state forward; startup and drain bubbles (gray) are confined to the triangular corners. The softmax attention layer is executed with synchronized all-to-all communication at full utilization.
  • Figure 6: Standings and submission pages for GrandCode in the three live Codeforces contests. The score corresponds to $S(\mathrm{joint})$, which is based on the full set of submissions in a single account.
  • ...and 3 more figures
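The hypothesis-verification loop of Figure 3 can be sketched in a few lines. Since the paper's actual problem and the tuple set $S(b)$ are not specified here, the sketch below substitutes a hypothetical one-dimensional surrogate: the hypothesized closed form $k_{\max} = \max_v (\mathrm{last}(v) - \mathrm{first}(v))$ is checked against a brute-force oracle that enumerates all index pairs $i < j$ with $b_i = b_j$; the loop structure (propose, generate small random instances, compare, revise on mismatch) is the part meant to mirror the figure.

```python
import random

def hypothesis(b):
    # Hypothesized closed form: k_max = max_v (last(v) - first(v)).
    first, last = {}, {}
    for i, v in enumerate(b):
        first.setdefault(v, i)
        last[v] = i
    return max((last[v] - first[v] for v in first), default=0)

def brute_force(b):
    # Exact answer by exhaustive enumeration: widest gap j - i over
    # all index pairs i < j with b[i] == b[j] (0 if no such pair).
    best = 0
    for i in range(len(b)):
        for j in range(i + 1, len(b)):
            if b[i] == b[j]:
                best = max(best, j - i)
    return best

def validate(num_trials=200, max_len=8, max_val=4, seed=0):
    # Small-scale verification: random small instances, hypothesis vs.
    # brute-force oracle. Returns the first counterexample, or None.
    rng = random.Random(seed)
    for _ in range(num_trials):
        b = [rng.randint(1, max_val) for _ in range(rng.randint(1, max_len))]
        if hypothesis(b) != brute_force(b):
            return b  # mismatch -> triggers hypothesis revision
    return None  # validated -> promoted to solution synthesis
```

For this surrogate the hypothesis is in fact exact, so `validate()` returns `None`; in the paper's pipeline, a returned counterexample would be fed back to the hypothesis-proposal agent.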