Reflective Context Learning: Studying the Optimization Primitives of Context Space

Nikita Vassilyev, William Berrios, Ruowang Zhang, Bo Han, Douwe Kiela, Shikib Mehri

Abstract

Generally capable agents must learn from experience in ways that generalize across tasks and environments. The fundamental problems of learning, including credit assignment, overfitting, forgetting, local optima, and high-variance learning signals, persist whether the learned object lies in parameter space or context space. While these challenges are well understood in classical machine learning optimization, they remain underexplored in context space, leading current methods to be fragmented and ad hoc. We present Reflective Context Learning (RCL), a unified framework for agents that learn through repeated interaction, reflection on behavior and failure modes, and iterative updates to context. In RCL, reflection converts trajectories and current context into a directional update signal analogous to gradients, while mutation applies that signal to improve future behavior in context space. We recast recent context-optimization approaches as instances of this shared learning problem and systematically extend them with classical optimization primitives, including batching, improved credit-assignment signal, auxiliary losses, failure replay, and grouped rollouts for variance reduction. On AppWorld, BrowseComp+, and RewardBench2, these primitives improve over strong baselines, with their relative importance shifting across task regimes. We further analyze robustness to initialization, the effects of batch size, sampling and curriculum strategy, optimizer-state variants, and the impact of allocating stronger or weaker models to different optimization components. Our results suggest that learning through context updates should be treated not as a set of isolated algorithms, but as an optimization problem whose mechanisms can be studied systematically and improved through transferable principles.
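To make the loop concrete, the sketch below shows one RCL iteration under the abstraction described above: reflection maps failed trajectories and the current context to directional update signals (the gradient analogue), and mutation applies them as edits in context space. This is a minimal illustration, not the paper's implementation; all names here (run_agent, reflect, mutate, Trace) are hypothetical placeholders.

```python
import random
from dataclasses import dataclass

@dataclass
class Trace:
    task: str      # hypothetical task identifier
    success: bool  # did this rollout solve the task?
    log: str       # serialized trajectory for the reflector

def rcl_step(context, tasks, run_agent, reflect, mutate, B=3, G=4):
    """One RCL iteration (illustrative): sample a batch of B tasks,
    roll each out G times, reflect on failures to obtain update
    signals, and mutate the context with the aggregated signal."""
    batch = random.sample(tasks, B)  # batching; failure replay would bias this draw
    signals = []
    for task in batch:
        traces = [run_agent(context, task) for _ in range(G)]  # grouped rollouts
        signals += [reflect(context, t) for t in traces if not t.success]
    return mutate(context, signals)  # structured edits in context space
```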

Paper Structure

This paper contains 47 sections, 14 equations, 13 figures, and 5 tables.

Figures (13)

  • Figure 1: The RCL optimization loop with primitives mapped to stages. A batch of $B$ tasks is sampled from $\mathcal{D}$ (with replay, §\ref{sec:replay}), each executed $G$ times (§\ref{sec:batching}). Failed traces are passed, alongside dual-trace annotations (§\ref{sec:credit}), to a multi-head reflector (§\ref{sec:aux}) that produces per-trace diagnostics $\Delta_i$. The mutator $f$ aggregates these into structured edits to individual playbook entries $e_i$, conditioned on a rolling optimization state document (§\ref{sec:momentum}).
  • Figure 2: Learning dynamics on AppWorld dev (Gemini 3.1 Flash-Lite, 57 tasks). Solid lines: current TGC at each checkpoint. Dashed lines: recently solved rate (fraction of tasks solved $\geq$1$\times$ in the trailing 5 iterations). Colored shading: active instability, the gap between the recently solved rate and the current TGC, measuring tasks solved within the window but not currently retained. Gray shading: stale regressions, the gap between the all-time per-example best-so-far and the recently solved rate, measuring tasks solved historically but not within the window. Stars: peak TGC achieved during training. Green verticals: first iteration at which every dev task has been solved at least once (full coverage). A sketch of these windowed metrics follows the figure list.
  • Figure 3: Design choice analysis (Gemini 3.1 Flash-Lite). (a) Seed robustness on AppWorld Challenge: RCL converges to 72--76 TGC from all three seeds; ACE without primitives diverges from weaker initializations. (b) Reflector $\times$ mutator allocation across benchmarks. Performance depends on the interaction between both roles and the task regime; no single configuration dominates uniformly. (c) Per-trace vs. batched reflection with $B{=}3$. Batched reflection helps on harder tasks (AW Challenge, BC+) but hurts when failures are diverse.
  • Figure 4: Training dynamics for all primitives with a 5-iteration sliding window. Format follows Figure \ref{fig:training-dynamics}: solid = current TGC; dashed = recently solved rate (per-example union over 5 iterations); colored shading = active instability (solved within window but not now); gray shading = stale regressions (solved historically but not within window). This is the strictest recency condition and produces the widest active instability gaps.
  • Figure 5: Training dynamics with a 10-iteration sliding window. Tasks solved anywhere in the last 10 iterations count as recently solvable, shifting instability from the active to the stale category. Active gaps narrow relative to the 5-iteration view (Figure \ref{fig:dynamics-w5}), but the relative ordering of primitives is preserved.
  • ...and 8 more figures
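As a reading aid for the training-dynamics figures (Figures 2, 4, and 5), the sketch below shows one way the windowed diagnostics could be computed from a boolean solve matrix; the function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def windowed_dynamics(solved: np.ndarray, window: int = 5):
    """Windowed training diagnostics from a boolean solve matrix.

    solved[t, i] is True if task i was solved at iteration t.
    Returns per-iteration arrays for the quantities plotted and
    shaded in the training-dynamics figures (names illustrative).
    """
    T, _ = solved.shape
    current = solved.mean(axis=1)                      # solid line: current TGC
    recent = np.array([                                # dashed line: recently solved rate
        solved[max(0, t - window + 1): t + 1].any(axis=0).mean()
        for t in range(T)
    ])
    best_so_far = np.array([                           # all-time per-example best-so-far
        solved[: t + 1].any(axis=0).mean() for t in range(T)
    ])
    active_instability = recent - current              # colored shading
    stale_regressions = best_so_far - recent           # gray shading
    return current, recent, best_so_far, active_instability, stale_regressions
```

Widening `window` moves mass from active instability to stale regressions, which matches the shift described between the 5- and 10-iteration views.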