
Beyond Stochastic Exploration: What Makes Training Data Valuable for Agentic Search

Chuzhan Hao, Wenfeng Feng, Guochao Jiang, Guofeng Quan, Guohua Liu, Yuewei Zhang

Abstract

Reinforcement learning (RL) has become an effective approach for advancing the reasoning capabilities of large language models (LLMs) through the strategic integration of external search engines. However, current RL-based search agents often rely on a process of stochastic exploration guided by carefully crafted outcome rewards, leading to inefficient reasoning trajectories and unstable training. To address these issues, we propose a novel framework, Hierarchical Experience (HiExp), to enhance the performance and training stability of search agents. Specifically, we extract empirical knowledge through contrastive analysis and a multi-level clustering mechanism, transforming raw reasoning trajectories into hierarchical experience knowledge. By leveraging experience-aligned training, we effectively regularize stochastic exploration, evolving it into a strategic and experience-driven search process. Extensive evaluations on multiple complex agentic search and mathematical reasoning benchmarks demonstrate that our approach not only achieves substantial performance gains but also exhibits strong cross-task and cross-algorithm generalization.
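The experience-construction step described above (contrastive analysis of rollouts, then multi-level clustering into hierarchical experience) can be sketched in miniature. This is a hedged illustration only: the embeddings, the reward threshold, the single-cluster k-means, and the stubbed strategy summaries are all assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over small float vectors (illustrative only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = defaultdict(list)
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i in range(k):
            if clusters[i]:
                centers[i] = tuple(sum(xs) / len(xs) for xs in zip(*clusters[i]))
    return centers, clusters

# Hypothetical trajectory records: (embedding, outcome reward).
trajectories = [((0.10, 0.20), 1.0), ((0.15, 0.25), 1.0),
                ((0.90, 0.80), 0.0), ((0.85, 0.90), 0.0)]

# Contrastive split: successful vs. failed rollouts.
positives = [e for e, r in trajectories if r > 0.5]
negatives = [e for e, r in trajectories if r <= 0.5]

# Case level: cluster successful trajectories into reusable exemplars.
centers, clusters = kmeans(positives, k=1)

# Strategy level: each cluster would be summarized (e.g. by an LLM) into a
# natural-language principle; represented here as a placeholder string.
strategies = [f"principle for cluster {i} ({len(c)} cases)"
              for i, c in clusters.items()]
```

In the actual framework the clustering would run over learned trajectory representations and the strategy level would hold distilled principles rather than stub strings; the sketch only shows the atomic-instance-to-principle direction of the hierarchy.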

Paper Structure

This paper contains 22 sections, 3 equations, 4 figures, 13 tables, and 1 algorithm.

Figures (4)

  • Figure 1: Comparison between stochastic exploration and experience-guided exploration. Experience-driven guidance facilitates more efficient reasoning trajectories, endowing LLMs with superior problem-solving capabilities for complex tasks.
  • Figure 2: Overview of the offline hierarchical experience construction and the experience-guided policy optimization framework. The hierarchy spans from atomic instances to strategic principles, providing multi-granularity guidance for the search agent. During the training process, strategy-based experiences are leveraged to guide initial planning, while case-based experiences are employed to provide fine-grained support for intermediate reasoning steps.
  • Figure 3: Training stability analysis of HiExp on multi-step retrieval benchmarks. Backbone denotes the performance of the base model trained via GRPO.
  • Figure 4: Overview of the distribution of query complexity over five multi-hop QA datasets.
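Figure 2's caption notes that strategy-based experience guides initial planning while case-based experience supports intermediate reasoning steps. A minimal sketch of how those two granularities might be injected at inference time follows; the prompt template, the token-overlap retrieval, and all example strings are hypothetical, not the paper's method.

```python
def build_planning_prompt(question, strategies):
    """Prepend strategy-level experience to the initial planning prompt
    (hypothetical template)."""
    exp = "\n".join(f"- {s}" for s in strategies)
    return f"Experience:\n{exp}\n\nQuestion: {question}\nPlan your search steps."

def retrieve_case(step_state, case_bank):
    """Pick the case-based experience most relevant to the current step
    (toy similarity: shared-token overlap)."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(case_bank, key=lambda c: overlap(step_state, c))

prompt = build_planning_prompt(
    "Who directed the film that won Best Picture in 1998?",
    ["Decompose multi-hop questions before querying the search engine."])

hint = retrieve_case(
    "search for best picture 1998 winner",
    ["When a search returns a film title, query its director next.",
     "Verify dates against a second source."])
```

A real system would use embedding similarity rather than token overlap to match case-based experience to the intermediate state, but the two-granularity routing (strategies at planning time, cases per step) is the point of the sketch.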