Learning to Learn-at-Test-Time: Language Agents with Learnable Adaptation Policies

Zhanzhi Lou, Hui Chen, Yibo Li, Qian Wang, Bryan Hooi

Abstract

Test-Time Learning (TTL) enables language agents to iteratively refine their performance through repeated interactions with the environment at inference time. At the core of TTL is an adaptation policy that updates the actor policy based on experience from previous episodes, thereby improving future behavior. Existing methods rely on fixed, hand-crafted adaptation policies rather than optimizing them for downstream improvement. We argue that optimal adaptation policies should be learned from task environments, not hand-engineered based on human intuition. To achieve this, we introduce Meta-TTL, a framework that formulates the discovery of effective adaptation policies as a bi-level optimization problem. Within this framework, the inner loop executes the standard TTL process, measuring how effectively a candidate adaptation policy helps an agent correct errors across sequential episodes. Guided by the agent's performance, the outer loop employs evolutionary search over a diverse distribution of training tasks to iteratively refine the adaptation policy. We evaluate Meta-TTL on Jericho and WebArena-Lite across both in-distribution (ID) and out-of-distribution (OOD) settings, using multiple meta-agent backbones. Results on both benchmarks show that Meta-TTL consistently outperforms hand-crafted baselines, suggesting that the optimized adaptation policy encodes transferable strategies that generalize beyond the training task distribution.
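The bi-level structure described above can be summarized in a short sketch. The code below is a minimal illustration under assumptions, not the authors' implementation: the function names `propose_candidate` and `run_ttl`, the pool size, and the selection scheme are all hypothetical, and an entire inner-loop TTL run (K sequential episodes on one task) is abstracted into a single callable that returns a scalar score.

```python
import random

def meta_ttl(train_tasks, propose_candidate, run_ttl, n_generations=10, pool_size=6):
    """Minimal sketch of Meta-TTL's bi-level optimization (hypothetical API).

    Outer loop: evolutionary search over candidate adaptation policies
    (here, meta-prompts). Inner loop: standard test-time learning, where
    run_ttl(task, meta_prompt) runs sequential episodes on `task`, lets the
    meta-prompt drive adaptation between episodes, and returns a scalar
    score (e.g. final success or area under the per-episode score curve).
    """
    pool = [propose_candidate(parent=None) for _ in range(pool_size)]
    scored = []

    for _ in range(n_generations):
        # Inner loop: evaluate each candidate adaptation policy via TTL
        # on a small batch of training tasks.
        scored = []
        for phi in pool:
            tasks = random.sample(train_tasks, k=min(4, len(train_tasks)))
            score = sum(run_ttl(task, phi) for task in tasks) / len(tasks)
            scored.append((score, phi))

        # Outer loop: keep the best candidates and propose mutated offspring
        # (in the paper, a proposer LM reflects on failures at this step).
        scored.sort(key=lambda x: x[0], reverse=True)
        survivors = [phi for _, phi in scored[: pool_size // 2]]
        offspring = [propose_candidate(parent=random.choice(survivors))
                     for _ in range(pool_size - len(survivors))]
        pool = survivors + offspring

    # Return the single best meta-prompt phi* for zero-shot use at test time.
    _, best_phi = max(scored, key=lambda x: x[0])
    return best_phi
```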

Paper Structure

This paper contains 26 sections, 5 equations, 3 figures, 5 tables, and 2 algorithms.

Figures (3)

  • Figure 1: Adaptation policies determine how the agent uses its experience up to episode $k$ to update the actor before episode $k+1$. Existing methods use a fixed adaptation rule, whereas Meta-TTL learns the adaptation policy across tasks and applies it zero-shot at test time.
  • Figure 2: Overview of Meta-TTL. Outer loop (meta-training): A proposer LM reflects and proposes candidate meta-prompts, which are validated locally and globally before entering a per-task expert pool. After training, a single optimized meta-prompt $\phi^*$ is selected from this pool. Inner loop (test-time learning): The meta-agent, governed by $\phi^*$, observes the actor's trajectory after each episode and generates verbal feedback that rewrites the actor's system prompt for the next attempt.
  • Figure 3: Per-episode score trajectories on the six Jericho evaluation games. Meta-TTL exhibits clearer upward trends across episodes than the baselines, supporting W-AUC as a metric of sustained test-time improvement.
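Figure 3 motivates W-AUC as a measure of sustained improvement across episodes rather than a single end-of-run score. The section does not spell out the weighting, so the sketch below is only an assumption: a weighted area under the per-episode score curve with linearly increasing weights, so that agents which keep improving in later episodes are rewarded over agents that peak early and then stagnate.

```python
def weighted_auc(scores):
    """Hypothetical W-AUC: weighted area under the per-episode score curve.

    Later episodes receive larger weights, so a trajectory that keeps
    improving scores higher than one with the same mean that plateaus.
    The linear weighting below is an assumption; the paper may define
    the weights differently.
    """
    weights = [k + 1 for k in range(len(scores))]  # 1, 2, ..., K
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Example: a steadily improving trajectory outranks a flat one with the
# same mean per-episode score.
print(weighted_auc([0.2, 0.4, 0.6, 0.8]))  # 0.6
print(weighted_auc([0.5, 0.5, 0.5, 0.5]))  # 0.5
```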