Read More, Think More: Revisiting Observation Reduction for Web Agents

Masafumi Enomoto, Ryoma Obara, Haochen Zhang, Masafumi Oyamada

Abstract

Web agents based on large language models (LLMs) rely on observations of web pages -- commonly represented as HTML -- as the basis for identifying available actions and planning subsequent steps. Prior work has treated the verbosity of HTML as an obstacle to performance and adopted observation reduction as a standard practice. We revisit this trend and demonstrate that the optimal observation representation depends on model capability and thinking token budget: (1) compact observations (accessibility trees) are preferable for lower-capability models, while detailed observations (HTML) are advantageous for higher-capability models; moreover, increasing thinking tokens further amplifies the benefit of HTML. (2) Our error analysis suggests that higher-capability models exploit layout information in HTML for better action grounding, while lower-capability models suffer from increased hallucination under longer inputs. We also find that incorporating observation history improves performance across most models and settings, and a diff-based representation offers a token-efficient alternative. Based on these findings, we suggest practical guidelines: adaptively select observation representations based on model capability and thinking token budget, and incorporate observation history using diff-based representations.
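The paper does not specify the exact diff format it uses for observation history, but the idea can be sketched as follows: instead of feeding the full observation from every past step, include only the textual difference between consecutive observations. A minimal sketch, assuming textual observations (e.g., serialized accessibility trees) and using Python's standard `difflib` as one plausible diffing choice; `observation_diff` is a hypothetical helper, not the authors' implementation:

```python
import difflib


def observation_diff(prev_obs: str, curr_obs: str) -> str:
    """Return a compact unified diff between two textual observations.

    Hypothetical helper: the paper does not detail its diff format.
    Using n=1 keeps only one line of context, trading completeness
    for a smaller token footprint.
    """
    diff_lines = difflib.unified_diff(
        prev_obs.splitlines(),
        curr_obs.splitlines(),
        lineterm="",
        n=1,
    )
    return "\n".join(diff_lines)


# Example: after the agent types into a textbox, only the changed
# node appears in the diff, not the whole page observation.
prev = "button 'Submit'\ntextbox 'Name' value=''"
curr = "button 'Submit'\ntextbox 'Name' value='Alice'"
print(observation_diff(prev, curr))
```

When pages change little between steps, such a diff is far shorter than the full observation, which is how a diff-based history can stay token-efficient while still conveying what each action changed.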

Paper Structure

This paper contains 9 sections, 3 figures, and 5 tables.

Figures (3)

  • Figure 1: Relationship between observation representation and task success rate (WorkArena L1). The x-axis shows the average number of input tokens per step; the y-axis shows the task success rate. "h" denotes HTML and "a" denotes the accessibility tree (a11y). Arrows indicate the change from HTML to a11y within the same model. Left: For lower-capability (open-source) models, reducing the observation (h → a) improves the success rate. Right: For higher-capability (proprietary) models, the success rate decreases instead.
  • Figure 2: Grounding error counts and their breakdown for a11y and HTML across models (WorkArena L1).
  • Figure 3: Relationship between action repetition rate and task success rate (WorkArena L1). The x-axis shows the rate of actions identical to the previous step; the y-axis shows the task success rate. The number on each data point indicates the length of observation history included as input.