When Users Change Their Mind: Evaluating Interruptible Agents in Long-Horizon Web Navigation

Henry Peng Zou, Chunyu Miao, Wei-Chieh Huang, Yankai Chen, Yue Zhou, Hanrong Zhang, Yaozu Wu, Liancheng Fang, Zhengyao Gu, Zhen Zhang, Kening Zheng, Fangxin Wang, Yi Nian, Shanghao Li, Wenzhe Fan, Langzhou He, Weizhi Zhang, Xue Liu, Philip S. Yu

Abstract

As LLM agents transition from short, static problem solving to executing complex, long-horizon tasks in dynamic environments, the ability to handle user interruptions, such as adding requirements or revising goals, during mid-task execution is becoming a core requirement for realistic deployment. However, existing benchmarks largely assume uninterrupted agent behavior or study interruptions only in short, unconstrained language tasks. In this paper, we present the first systematic study of interruptible agents in long-horizon, environmentally grounded web navigation tasks, where actions induce persistent state changes. We formalize three realistic interruption types, namely addition, revision, and retraction, and introduce InterruptBench, a benchmark derived from WebArena-Lite that synthesizes high-quality interruption scenarios under strict semantic constraints. Using a unified interruption simulation framework, we evaluate six strong LLM backbones across single- and multi-turn interruption settings, analyzing both their effectiveness in adapting to updated intents and their efficiency in recovering from mid-task changes. Our results show that handling user interruptions effectively and efficiently during long-horizon agentic tasks remains challenging even for powerful large-scale LLMs. Code and dataset are available at https://github.com/HenryPengZou/InterruptBench.

Figures (4)

  • Figure 1: InterruptBench setup and evaluation. We evaluate an agent operating in the WebArena environment under user-driven interruptions at dynamic timesteps, where the user may add to, revise, or retract parts of the original request.
  • Figure 2: Budget-limited success rate as a function of post-interruption action budget $k$ across three interruption scenarios. Solid lines denote runs that receive mid-task interruption updates, while dashed lines represent matched no-interruption runs that do not receive updates but are evaluated against the final intent, serving as a lower-bound reference for no-update behavior under the same budget.
  • Figure 3: Post-update success curves $\mathrm{SR}_m(k\mid n)$ for $n\in\{1,2,3\}$ interruptions, aligned at the latest update (so $k$ counts actions after the final interruption). Stages 1/2/3 correspond to $n{=}1/2/3$. The figure shows how additional updates shift success upward under a fixed post-update budget, with diminishing returns as curves approach a model-specific plateau (a schematic computation of this metric is sketched after this list).
  • Figure F.1: User interruption with additional intent in WebArena. The agent initially plans a walking vs. driving comparison from Pittsburgh Downtown → Carnegie Mellon University, but a mid-trajectory user interrupt adds a new constraint (start from Randyland); we contrast an error trace that fails to fully retract/update the plan with a correct trace that revises the origin, recomputes routes, and returns the updated time difference.
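
To make the post-update metric described in the Figure 2 and Figure 3 captions concrete, below is a minimal sketch of how a budget-limited success rate could be computed from per-task logs aligned at the latest interruption. The Trajectory schema, its field names, and the budget_limited_success_rate function are illustrative assumptions for this sketch, not part of the InterruptBench codebase.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trajectory:
    """Per-task log under an assumed, illustrative schema."""
    last_interrupt_step: int       # action index of the final user interruption
    success_step: Optional[int]    # first action index satisfying the final intent, or None

def budget_limited_success_rate(trajs: List[Trajectory], k: int) -> float:
    """Fraction of tasks solved within k actions after the final interruption."""
    if not trajs:
        return 0.0
    solved = sum(
        1
        for t in trajs
        if t.success_step is not None
        and t.success_step - t.last_interrupt_step <= k
    )
    return solved / len(trajs)

# Sweeping the post-update budget k traces a success curve of the kind plotted
# in Figures 2 and 3 (here with three toy trajectories).
logs = [Trajectory(5, 9), Trajectory(7, None), Trajectory(4, 20)]
curve = {k: budget_limited_success_rate(logs, k) for k in range(0, 25, 5)}
print(curve)

Sweeping $k$ in this way yields one curve per model and per number of interruptions $n$, which is how the plateau behavior noted in the Figure 3 caption becomes visible.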