Posterior Optimization with Clipped Objective for Bridging Efficiency and Stability in Generative Policy Learning

Yuhui Chen, Haoran Li, Zhennan Jiang, Yuxing Qin, Yuxuan Wan, Weiheng Liu, Dongbin Zhao

Abstract

Expressive generative models have advanced robotic manipulation by capturing complex, multi-modal action distributions over temporally extended trajectories. However, fine-tuning these policies via reinforcement learning (RL) remains challenging due to instability and sample inefficiency. We introduce Posterior Optimization with Clipped Objective (POCO), a principled RL framework that formulates policy improvement as a posterior inference problem tailored to temporal action chunks. Through an Expectation-Maximization procedure, POCO distills a reward-weighted implicit posterior into the policy without likelihood estimation. Furthermore, POCO adopts an offline-to-online paradigm that anchors online exploration to pre-trained priors, and its model-agnostic design scales to fine-tuning large vision-language-action (VLA) models without architectural modifications. Evaluations across 7 simulation benchmarks and 4 contact-rich real-world tasks demonstrate that POCO prevents catastrophic policy collapse, outperforms state-of-the-art baselines, and achieves a 96.7% success rate on real-world tasks. Videos are available at our project website: https://cccedric.github.io/poco/.
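To make the posterior-inference framing concrete, the implicit posterior plausibly takes the standard RL-as-inference form below. This is a sketch under stated assumptions: the temperature \eta and the learned critic Q over action chunks are not given in the abstract, and the paper's actual clipped objective may differ.

q(a_{t:t+H} \mid s_t) \;\propto\; \pi_\theta(a_{t:t+H} \mid s_t)\, \exp\!\big( Q(s_t, a_{t:t+H}) / \eta \big), \qquad w_i = \frac{\exp\!\big( Q(s_t, a_i) / \eta \big)}{\sum_j \exp\!\big( Q(s_t, a_j) / \eta \big)}, \quad a_i \sim \pi_\theta(\cdot \mid s_t).

Because the posterior is represented only through these self-normalized weights over sampled action chunks, the M-step can project it back onto the generative policy without ever evaluating the policy's likelihood.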

Paper Structure

This paper contains 34 sections, 30 equations, 10 figures, 6 tables, and 1 algorithm.

Figures (10)

  • Figure 1: Conceptual overview comparing typical RL paradigms against POCO. (Left) Off-policy methods enable efficient data reuse, while on-policy methods ensure stability. (Right) Schematic performance curves show that POCO is designed for sample-efficient, stable improvement.
  • Figure 2: Overview of the proposed framework. The learning paradigm consists of two stages. (Left) First, the policy is pre-trained via supervised learning on expert demonstrations collected through tele-operation. (Right) Second, during online fine-tuning, the policy interacts with the environment and improves through an iterative E-M procedure: in the Implicit E-step, multiple candidate actions are sampled from the current actor and evaluated by the critic to obtain importance weights; in the M-step, these weighted samples are used to compute the POCO loss, updating the parameters to obtain the new actor (see the sketch after this list).
  • Figure 3: Visualization of all simulation tasks. The simulation tasks include: a) scene, b) puzzle-3x3, c) cube-double, d) cube-triple from OGBench [ogbench], and e) lift, f) can, g) square from RoboMimic [robomimic].
  • Figure 4: Visualization of the real-world environment platform. During execution, an operator presses the green button to assign a sparse reward upon success, or the red button to immediately terminate dangerous trajectories for hardware safety.
  • Figure 5: Visualization of all real-world tasks. The real-world tasks include: a) Pick Cube, b) Route Cable, c) Insert USB, d) Assemble SSD, e) Pick Pen. The camera perspectives used in the experiments differ from those used for visualization.
  • ...and 5 more figures
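To ground the E-M procedure described in the Figure 2 caption, the following PyTorch-style sketch implements one update. It is an illustration, not the authors' code: actor.sample, actor.generative_loss, critic, and the hyperparameters num_candidates, eta, and clip_eps are all hypothetical, and the clipping shown is only one plausible reading of the "clipped objective".

import torch
import torch.nn.functional as F

def poco_em_step(actor, critic, obs, num_candidates=16, eta=1.0, clip_eps=0.2):
    """One illustrative E-M update in the spirit of Figure 2.

    Hypothetical sketch: actor.sample(obs) is assumed to draw one action
    chunk per observation, actor.generative_loss(obs, chunk) to return a
    per-sample training loss (e.g., a denoising loss for a diffusion
    actor), and critic(obs, chunk) to score whole chunks. None of these
    APIs, nor the exact POCO loss or clipping rule, come from the paper.
    """
    batch = obs.shape[0]
    # Implicit E-step: draw N candidate action chunks per observation
    # from the current actor and score them with the critic.
    obs_rep = obs.repeat_interleave(num_candidates, dim=0)
    with torch.no_grad():
        chunks = actor.sample(obs_rep)
        q = critic(obs_rep, chunks).view(batch, num_candidates)
        # Reward-weighted, self-normalized importance weights.
        w = F.softmax(q / eta, dim=-1)
        # Cap each sample's weight to bound its influence, loosely
        # mirroring POCO's clipped objective (the exact rule may differ).
        w = w.clamp(max=(1.0 + clip_eps) / num_candidates)
    # M-step: distill the weighted samples into the actor. Only samples
    # and weights are needed; the policy likelihood is never evaluated.
    per_sample = actor.generative_loss(obs_rep, chunks).view(batch, num_candidates)
    return (w * per_sample).sum(dim=-1).mean()

The property this sketch mirrors is the one the abstract emphasizes: the M-step consumes only sampled action chunks and their critic-derived weights, so no likelihood of the generative policy is ever estimated.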