TAG: Target-Agnostic Guidance for Stable Object-Centric Inference in Vision-Language-Action Models

Jiaying Zhou, Zhihao Zhan, Ruifeng Zhai, Qinhan Lyu, Hao Liu, Keze Wang, Liang Lin, Guangrun Wang

Abstract

Vision-Language-Action (VLA) policies have shown strong progress in mapping language instructions and visual observations to robotic actions, yet their reliability degrades in cluttered scenes with distractors. By analyzing failure cases, we find that many errors do not arise from infeasible motions, but from instance-level grounding failures: the policy often produces a plausible grasp trajectory that lands slightly off-target or even on the wrong object instance. To address this issue, we propose TAG (Target-Agnostic Guidance), a simple inference-time guidance mechanism that explicitly reduces distractor- and appearance-induced bias in VLA policies. Inspired by classifier-free guidance (CFG), TAG contrasts policy predictions under the original observation and an object-erased observation, and uses their difference as a residual steering signal that strengthens the influence of object evidence in the decision process. TAG does not require modifying the policy architecture and can be integrated with existing VLA policies with minimal training and inference changes. We evaluate TAG on standard manipulation benchmarks, including LIBERO, LIBERO-Plus, and VLABench, where it consistently improves robustness under clutter and reduces near-miss and wrong-object executions.
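The CFG-inspired contrast described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name, the guidance scale, and the particular combination rule (`unconditional + scale * residual`) are assumptions, and the real policy operates on full observations and action chunks rather than plain vectors.

```python
import numpy as np

def tag_guided_action(policy, obs, obs_erased, guidance_scale=1.5):
    """CFG-style residual guidance (illustrative sketch).

    Contrasts the policy's prediction on the original observation with
    its prediction on a target-erased observation, and amplifies the
    difference so object evidence dominates the final action.
    """
    a_cond = policy(obs)            # prediction with the target visible
    a_uncond = policy(obs_erased)   # prediction with the target erased
    delta_v = a_cond - a_uncond     # residual carrying object evidence
    return a_uncond + guidance_scale * delta_v
```

With `guidance_scale=1.0` this reduces to the ordinary conditional prediction; larger scales push the action further along the object-evidence direction.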

Paper Structure

This paper contains 42 sections, 9 equations, 8 figures, 7 tables.

Figures (8)

  • Figure 1: Qualitative comparison of original and erased visual observations. The top row displays original robotic manipulation trajectories from the LIBERO and VLABench benchmarks. The bottom row presents the corresponding sequences in which the target objects (e.g., the bowl and apple) have been digitally erased.
  • Figure 2: Overview of the Target-Agnostic Guidance (TAG) framework. Our method consists of two core components. The top shows the Counterfactual Synthesis pipeline, which automatically erases the manipulated target from raw videos using target parsing, tracking, and inpainting to generate unconditional data. The bottom illustrates the Policy Architecture, where a dual-branch network processes both the original conditional input and the erased unconditional input. By extracting the residual guidance ($\Delta v$) between them, TAG effectively isolates background distractors, forcing the model to focus precisely on the target object and intended manipulation to generate accurate action sequences.
  • Figure 3: Attention map comparison between TAG and $\pi_{0.5}$. TAG (left) precisely grounds the attention on the target and ignores distractors, resulting in successful manipulation. $\pi_{0.5}$ (right) struggles with distractor competition, showing diffused attention that leads to task failure.
  • Figure 4: Comparison on the VLABench benchmark. For tasks involving highly similar distractors (e.g., "Please pick the poker 3 of diamonds"), baseline policies ($\pi_0$ and $\pi_{0.5}$) are misled by visually similar distractors and grasp the wrong cards. By leveraging the TAG strategy, our method effectively filters out these distractions, shifting attention precisely to the correct target object to prevent misjudgments and ensure an accurate grasp.
  • Figure 5: Visual comparison of different erase methods during inference on LIBERO. In the task "put both moka pots on the stove", we evaluate various strategies for constructing the unconditional observation used by TAG. The sequence demonstrates that alternative methods, such as simple masking (Mask-gray/black), inpainting (Erase), and full blackout (Black), disrupt essential spatial priors or introduce visual artifacts, leading to manipulation failures (e.g., colliding with the distractor cup). In contrast, our Background strategy provides a stable and effective unconditional input, enabling the policy to accurately place the moka pot on the stove.
  • ...and 3 more figures