PRAISE: Prefix-Based Rollout Reuse in Agentic Search Training

Erhan Zhang, Yiqun Chen, Zechun Niu, Wei Yang, Xiaochi Wei, Yan Gao, Yi Wu, Yao Hu, Jiaxin Mao

Abstract

In agentic search, large language models (LLMs) are trained to perform multi-turn retrieval and reasoning for complex tasks such as multi-hop question answering (QA). However, current search-based Reinforcement Learning (RL) methods suffer from two core limitations: expensive long-horizon rollouts are under-utilized during training, and supervision is typically available only at the final answer, resulting in severe reward sparsity. We present Prefix-based Rollout reuse for Agentic search with Intermediate Step rEwards (PRAISE), a framework for improving both data efficiency and credit assignment in agentic search training. Given a complete search trajectory, PRAISE extracts prefix states at different search turns, elicits intermediate answers from them, and uses these prefixes both to construct additional training trajectories and to derive step-level rewards from performance differences across prefixes. Our method uses a single shared model for both search policy learning and prefix answer evaluation, enabling joint optimization without extra human annotations or a separate reward model. Experiments on multi-hop QA benchmarks show that PRAISE consistently improves performance over strong baselines.
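The step-reward construction described above admits a compact sketch. The snippet below assumes token-level F1 as the answer scorer (a standard QA metric; the paper's exact scoring function is not reproduced here) and uses illustrative function names: the reward for search turn t is the score difference between the intermediate answers elicited from the prefixes before and after that turn.

    from collections import Counter

    def token_f1(pred: str, gold: str) -> float:
        """Token-level F1 between predicted and gold answers (assumed scorer)."""
        p, g = pred.lower().split(), gold.lower().split()
        overlap = sum((Counter(p) & Counter(g)).values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(p), overlap / len(g)
        return 2 * precision * recall / (precision + recall)

    def step_rewards(prefix_answers: list, gold: str) -> list:
        """Reward for turn t = score(answer after t+1 turns) - score(answer after t turns)."""
        scores = [token_f1(a, gold) for a in prefix_answers]
        return [scores[t + 1] - scores[t] for t in range(len(scores) - 1)]

    # Example: intermediate answers elicited after 0, 1, and 2 search turns.
    print(step_rewards(["unknown", "the Eiffel Tower", "Gustave Eiffel"],
                       "Gustave Eiffel"))  # -> [0.4, 0.6] (up to float rounding)

A positive reward credits a turn whose retrieval moved the intermediate answer closer to the ground truth, which is how the scheme converts a single terminal label into per-step supervision.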

Figures (11)

  • Figure 1: Overview of PRAISE. Left: Main Search Rollout. The policy performs multi-turn search and produces a complete trajectory with a final answer. Middle: Prefix Answering. PRAISE extracts prefix states and generates an intermediate answer from each prefix (a sketch of this step follows the list). Right: Reward Assignment and Joint Optimization. Prefix answers are scored against the ground-truth answer, step rewards are computed from adjacent score differences, and the final answer receives a terminal reward. All resulting samples are jointly used for PPO training.
  • Figures 2–11: captions did not survive extraction; the recoverable panel titles are "(a) Turn 0", "(b) Turn 1", and "(a) 7B model on F1".
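The "Prefix Answering" panel of Figure 1 can likewise be sketched. Here elicit_answer is a hypothetical stand-in for prompting the shared policy model to answer directly from a truncated trajectory, and the (thought, query, documents) turn layout is an assumption about the trajectory format, not the paper's actual interface.

    from typing import Callable, List, Tuple

    # One search turn: (model reasoning, issued query, retrieved documents).
    Turn = Tuple[str, str, str]

    def make_prefix_samples(
        question: str,
        turns: List[Turn],
        elicit_answer: Callable[[str, List[Turn]], str],
    ) -> List[dict]:
        """Reuse one complete rollout as len(turns) + 1 training samples."""
        samples = []
        for t in range(len(turns) + 1):
            prefix = turns[:t]                        # state after t search turns
            answer = elicit_answer(question, prefix)  # intermediate answer from prefix
            samples.append({"question": question, "prefix": prefix, "answer": answer})
        return samples

Feeding these intermediate answers to the step-reward computation above, and adding the prefix samples to the PPO batch alongside the full trajectory, yields the joint optimization the caption describes.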