Beyond Attention Magnitude: Leveraging Inter-layer Rank Consistency for Efficient Vision-Language-Action Models

Peiju Liu, Jinming Liu, Xipeng Qiu, Xuanjing Huang

Abstract

Vision-Language-Action (VLA) models excel in robotic manipulation but suffer from significant inference latency due to processing dense visual tokens. Existing token reduction methods predominantly rely on attention magnitude as a static selection criterion. In this work, we challenge this assumption, revealing that the informativeness of high-attention tokens is task-dependent and that such tokens can even degrade policy performance. To address this, we introduce \textbf{TIES} (\textbf{T}au-guided \textbf{I}nter-layer \textbf{E}fficient \textbf{S}election), a dynamic framework guided by inter-layer token ranking consistency. By adaptively balancing attention magnitude with ranking consistency, TIES ensures robust token selection without requiring additional training. On the CogACT + SIMPLER benchmark, TIES improves average success rates by 6\% while reducing token usage by 78\%, and demonstrates strong generalization across diverse decoders and benchmarks.
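
To make the selection rule concrete, below is a minimal sketch of how attention magnitude and inter-layer rank consistency could be combined, assuming per-layer attention scores over the visual tokens are already available. The function name, the roughly 22% keep ratio (matching the 78% reduction reported above), the $\tau$ threshold, and the fallback strategy are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of consistency-guided visual token selection.
# Names, thresholds, and the fallback branch are assumptions for illustration.
import numpy as np
from scipy.stats import kendalltau

def select_tokens(attn_prev, attn_curr, keep_ratio=0.22, tau_threshold=0.5):
    """Keep a subset of visual tokens based on attention magnitude,
    falling back to a more conservative strategy when the inter-layer
    ranking is inconsistent (low Kendall tau)."""
    # Agreement between the two layers' token-importance rankings.
    tau, _ = kendalltau(attn_prev, attn_curr)

    k = max(1, int(keep_ratio * attn_curr.shape[0]))
    if tau >= tau_threshold:
        # Rankings agree across layers: trust attention magnitude (Top-k).
        keep = np.argsort(-attn_curr)[:k]
    else:
        # Rankings disagree: attention magnitude is unreliable here, so use
        # a placeholder alternative (uniform sampling over token positions).
        keep = np.linspace(0, attn_curr.shape[0] - 1, k).astype(int)
    return np.sort(keep)

# Toy usage: random attention scores for 256 visual patch tokens.
rng = np.random.default_rng(0)
prev, curr = rng.random(256), rng.random(256)
kept = select_tokens(prev, curr)
print(f"kept {kept.size} of 256 tokens")
```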

Paper Structure

This paper contains 19 sections, 1 equation, 7 figures, 2 tables, and 1 algorithm.

Figures (7)

  • Figure 1: Informative vs. misleading high-attention tokens. Patches with high attention magnitude are unshaded, with the top 5 highlighted in red. (a) Successful case: high-attention tokens effectively localize task-relevant information. (b) Failure case: high-attention tokens focus on spurious features, leading to policy errors. Our approach introduces a consistency-based indicator to distinguish these scenarios and adaptively select tokens.
  • Figure 2: TIES framework. TIES dynamically computes the Kendall $\tau$ and adaptively decides the token selection strategy.
  • Figure 3: Performance of Top‑k and Bottom‑k strategies. We observe a performance inversion in the Drawer task, while the conventional attention-importance assumption holds in the MoveNear task.
  • Figure 4: Entropy distribution.
  • Figure 5: Kendall $\tau$ distribution. (A sketch of how such inter-layer $\tau$ values can be computed follows this list.)
  • ...and 2 more figures
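
As a companion to Figures 2 and 5, the sketch below shows one hypothetical way to obtain inter-layer Kendall $\tau$ values from a backbone's attention weights. The tensor shapes, helper names, and head/query averaging scheme are assumptions for illustration, not the authors' code.

```python
# Hypothetical computation of the per-layer Kendall tau values (cf. Figure 5),
# assuming attn[l] holds layer l's attention weights with shape
# (heads, queries, num_visual_tokens).
import numpy as np
from scipy.stats import kendalltau

def visual_token_importance(attn_layer):
    # Average attention received by each visual token over heads and queries.
    return attn_layer.mean(axis=(0, 1))

def interlayer_tau(attn):
    """Kendall tau between token-importance rankings of consecutive layers."""
    scores = [visual_token_importance(a) for a in attn]
    return [kendalltau(scores[l], scores[l + 1])[0]
            for l in range(len(scores) - 1)]

# Toy example: 12 layers, 8 heads, 300 queries, 256 visual tokens.
rng = np.random.default_rng(0)
attn = [rng.random((8, 300, 256)) for _ in range(12)]
taus = interlayer_tau(attn)
print(np.round(taus, 3))  # low tau signals unstable attention rankings
```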