
ProCeedRL: Process Critic with Exploratory Demonstration Reinforcement Learning for LLM Agentic Reasoning

Jingyue Gao, Yanjiang Guo, Xiaoshuai Chen, Jianyu Chen

Abstract

Reinforcement Learning (RL) significantly enhances the reasoning abilities of large language models (LLMs), yet applying it to multi-turn agentic tasks remains challenging due to the long-horizon nature of interactions and the stochasticity of environmental feedback. We identify a structural failure mode in agentic exploration: suboptimal actions elicit noisy observations that turn the context misleading, which further weakens subsequent decision-making and makes recovery increasingly difficult. This compounding feedback loop of errors renders standard exploration strategies ineffective, leaving them vulnerable to the model's reasoning quality and the environment's randomness. To mitigate this issue, we propose ProCeedRL (Process Critic with Exploratory Demonstration RL), which shifts exploration from passive selection to active intervention. ProCeedRL employs a process-level critic that monitors interactions in real time and injects reflection-based demonstrations to stop errors from accumulating. This approach significantly exceeds the saturation ceiling of the model's vanilla exploration, demonstrating substantial exploratory benefits. By learning from both exploratory demonstrations and on-policy samples, ProCeedRL markedly improves exploration efficiency and achieves superior performance on complex deep-search and embodied tasks.
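
To make this intervention loop concrete, the following is a minimal sketch of a ProCeed-style rollout, not the authors' implementation: the `policy`/`critic`/`env` interfaces, the threshold `MIN_RATING`, and the helper `refine_with_reflection` are all hypothetical names standing in for the process-level critic and the reflection-based refinement described above.

```python
# Hedged sketch of a ProCeed-style rollout (all interfaces are assumptions).
# A process-level critic rates each step in real time; steps judged adverse
# are refined via reflection and rerun, so the noisy observation from a bad
# action never enters the running context.

MIN_RATING = 0.5  # hypothetical threshold separating adverse from acceptable steps

def refine_with_reflection(policy, critic, context, bad_action):
    # Hypothetical reflection step: the critic articulates why the action
    # is adverse, and the policy re-plans conditioned on that critique.
    critique = critic.explain(context, bad_action)
    return policy.act(context + [critique])

def proceed_rollout(policy, critic, env, max_turns=20):
    context = [env.reset()]  # running context of observations and actions
    trajectory = []
    for _ in range(max_turns):
        action = policy.act(context)
        rating = critic.rate_step(context, action)  # process-level rating
        if rating < MIN_RATING:
            # Active intervention: replace the adverse action before its
            # misleading observation can pollute subsequent reasoning.
            action = refine_with_reflection(policy, critic, context, action)
        observation, done = env.step(action)  # assumed (obs, done) interface
        trajectory.append((action, observation, rating))
        context += [action, observation]
        if done:
            break
    return trajectory
```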

Paper Structure

This paper contains 44 sections, 1 equation, 6 figures, 7 tables, and 1 algorithm.

Figures (6)

  • Figure 1: In multi-turn agentic tasks, we find that noisy environmental feedback in the context can degrade a model's reasoning ability (left), with weaker models affected more severely (right). Together, these two phenomena make recovery in multi-turn interactions extremely difficult, as accumulated contextual noise compounds and rapidly weakens the model.
  • Figure 2: Comparison between standard independently repeated sampling (a) and ProCeedRL rollouts (b). Left: In vanilla exploration, the model's suboptimal action may result in irrelevant or misleading observations, which hinder all subsequent reasoning and the exploration of correct samples. Right: In ProCeedRL, a critic actively monitors the planning process. When an adverse action is detected due to faulty reasoning or low-quality returned observations, a refined demonstration replaces it to guide the agent out of the vicious circle in (a) and mitigate the exploration issue.
  • Figure 3: The overall workflow of our method with an example. (a) When the agent takes a suboptimal action, the observation reinforces this path and derails subsequent model reasoning, resulting in low exploration efficiency (Sec. \ref{sec:method_problem}). We employ a process-level critic to identify flawed steps, at which the agent refines and reruns the adverse actions, thereby breaking the vicious circle and raising the exploration and reasoning limits (Sec. \ref{sec:method_framework}). (b) After collecting trajectories with ProCeed rollout, we combine them with directly generated samples to form meaningful groups for the subsequent policy optimization in (c), as described in detail in Sec. \ref{sec:method_theory}; a sketch of this grouping step follows this list.
  • Figure 4: Pass$@k$ of ProCeed and vanilla rollout. The x-axis denotes the number of generation-equivalent vanilla samples. Our method significantly improves exploration efficiency, matching pass$@k$ with less computation. Notably, it exceeds the saturation ceiling of vanilla exploration with only a few samples (denoted by stars); the pass$@k$ estimator is sketched after this list.
  • Figure 5: Improvement of refined actions over the original ones. Our method improves suboptimal actions with low ratings, whereas the improvements gradually diminish as the original actions become better.
  • ...and 1 more figure
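
The grouping step in Figure 3(b)-(c) can be illustrated with a group-relative advantage computation in the style of GRPO. This is a hedged sketch under the assumption that rewards are normalized within each mixed group of ProCeed-guided and vanilla trajectories; the paper's exact objective may differ, and `group_advantages` is a hypothetical helper.

```python
import numpy as np

def group_advantages(group_rewards, eps=1e-6):
    """Group-relative advantages for one task's rollout group.

    group_rewards: terminal rewards of trajectories sampled for the same
    task, mixing ProCeed-guided and directly generated (vanilla) samples.
    """
    r = np.asarray(group_rewards, dtype=np.float64)
    # Normalize against the group's own statistics. A group with identical
    # rewards (all fail or all succeed) yields zero learning signal, which
    # is why mixing in exploratory demonstrations makes groups "meaningful".
    return (r - r.mean()) / (r.std() + eps)

# Example: two ProCeed successes rescue an otherwise all-failure group.
print(group_advantages([1.0, 1.0, 0.0, 0.0]))  # -> approx. [ 1.  1. -1. -1.]
```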
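For Figure 4, pass$@k$ is presumably computed with the standard unbiased combinatorial estimator: given $n$ samples per task of which $c$ are correct, $\text{pass}@k = 1 - \binom{n-c}{k} / \binom{n}{k}$. A minimal implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n rollouts (c of them correct) is correct."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 rollouts per task, 4 of them correct.
print(pass_at_k(n=16, c=4, k=8))  # ≈ 0.962
```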