
Batched Contextual Reinforcement: A Task-Scaling Law for Efficient Reasoning

Bangji Yang, Hongbo Ma, Jiajun Fan, Ge Liu

Abstract

Large Language Models employing Chain-of-Thought reasoning achieve strong performance but suffer from excessive token consumption that inflates inference costs. Existing efficiency methods such as explicit length penalties, difficulty estimators, or multi-stage curricula either degrade reasoning quality or require complex training pipelines. We introduce Batched Contextual Reinforcement (BCR), a minimalist, single-stage training paradigm that unlocks efficient reasoning through a simple structural modification: training the model to solve N problems simultaneously within a shared context window, rewarded purely by per-instance accuracy. This formulation creates an implicit token budget and yields several key findings. (1) We identify a novel task-scaling law: as the number of concurrent problems N increases during inference, per-problem token usage decreases monotonically while accuracy degrades far more gracefully than for baselines, establishing N as a controllable throughput dimension. (2) BCR challenges the traditional accuracy-efficiency trade-off by demonstrating a "free lunch" phenomenon at standard single-problem inference: across both 1.5B and 4B model families, BCR reduces token usage by 15.8% to 62.6% while consistently maintaining or improving accuracy on five major mathematical benchmarks. (3) Qualitative analyses reveal emergent self-regulated efficiency, where models autonomously eliminate redundant metacognitive loops without explicit length supervision. (4) Crucially, we empirically demonstrate that implicit budget constraints circumvent the adversarial gradients and catastrophic optimization collapse inherent to explicit length penalties, offering a stable, constraint-based alternative for length control. These results establish BCR as a practical method, showing that simple structural incentives can unlock latent high-density reasoning in LLMs.
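
To make the structural modification above concrete, the following is a minimal sketch of the problem-group packaging and the accuracy-only reward. The prompt template, answer markup, and helper names (`pack_problem_group`, `per_instance_rewards`) are illustrative assumptions, not the paper's exact implementation:

```python
# A minimal sketch of BCR's setup, under assumed prompt conventions.

SYSTEM_INSTRUCTION = (
    "Solve all of the problems below. State each final answer as "
    "\\boxed{...} under the heading 'Problem k:'."
)

def pack_problem_group(problems: list[str]) -> str:
    """Package N questions into one prompt sharing a single context window."""
    body = "\n\n".join(f"Problem {k}: {q}" for k, q in enumerate(problems, 1))
    return f"{SYSTEM_INSTRUCTION}\n\n{body}"

def per_instance_rewards(extracted: list[str], references: list[str]) -> list[float]:
    """Reward each problem purely by accuracy: 1 for a correct final
    answer, 0 otherwise. No explicit length penalty appears anywhere;
    the shared context window acts as the implicit token budget."""
    return [float(a == r) for a, r in zip(extracted, references)]
```

Because all N solutions must fit in one completion, verbose reasoning on one problem directly crowds out the budget available to the others, which is the pressure the abstract credits for the implicit token budget.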

Paper Structure

This paper contains 39 sections, 8 equations, 5 figures, 24 tables, and 3 algorithms.

Figures (5)

  • Figure 1: A new scaling dimension: task-level inference scaling on OLYMPIAD. We vary the number of concurrent problems $N$. Left axis (bars): per-problem tokens. Right axis (lines): accuracy. The baseline (dark blue) reduces tokens as $N$ grows but suffers accuracy collapse. BCR (light blue) crystallizes this efficiency: ${\sim}$60% token reduction even at $N{=}1$, with graceful accuracy degradation. This reveals a task-scaling law: more concurrent problems $\Rightarrow$ more efficient reasoning.
  • Figure 2: Overview of BCR. We package $N$ questions into a problem group with a system instruction and shared token budget. The model generates a single completion solving all $N$ problems sequentially. Per-problem answers are extracted via a stack-based parser for accuracy verification, combined with a format reward. Training follows standard GRPO---no length penalties or auxiliary models required. A minimal sketch of such a parser is given after this list.
  • Figure 3: Efficiency-Accuracy Pareto Frontier on Minerva. The trajectories show checkpoint evaluations during the training process. The final models (stars) demonstrate that BCR consistently pushes the Pareto frontier significantly toward higher accuracy and lower token usage for both model families.
  • Figure 4: Training group size ablation. All models trained for 300 steps and evaluated at $N{=}1$. $N{=}3$ provides the best accuracy-efficiency trade-off. See Appendix~\ref{sec:extended_results} for MATH-500 and Olympiad.
  • Figure 5: Extended Efficiency-Accuracy Pareto Frontiers. Training trajectories on AIME25, AMC23, MATH-500, and Olympiad. The arrows track intermediate model checkpoints during BCR optimization, demonstrating a continuous, stable shift toward lower token consumption and competitive accuracy. The final BCR models (stars) consistently dominate the Pareto frontier relative to baseline models of comparable or larger scale.
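
Figure 2's pipeline hinges on reliably pulling each problem's final answer out of the single shared completion. The sketch below shows one way such a stack-based parser can work, assuming answers are wrapped in \boxed{...} with possibly nested braces (an assumption; the paper's exact answer markup is not reproduced here):

```python
def extract_boxed(completion: str) -> list[str]:
    """Stack-based extraction of \\boxed{...} answers from a batched
    completion, tolerating nested braces. Returns one entry per
    occurrence, in order of appearance."""
    answers, i = [], 0
    marker = r"\boxed{"
    while True:
        start = completion.find(marker, i)
        if start == -1:
            return answers
        depth, j = 1, start + len(marker)  # depth counts unmatched '{'
        while j < len(completion) and depth > 0:
            if completion[j] == "{":
                depth += 1
            elif completion[j] == "}":
                depth -= 1
            j += 1
        if depth == 0:  # matching '}' found at position j - 1
            answers.append(completion[start + len(marker):j - 1])
        i = j  # resume scanning after this span
```

Tracking brace depth explicitly, rather than matching with a regular expression, keeps nested answers such as \boxed{\frac{1}{2}} intact; the extracted strings can then be compared against references to produce the per-instance rewards fed to GRPO.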