RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment

Qiyuan Zhuang, He-Yang Xu, Yijun Wang, Xin-Yang Zhao, Yang-Yang Li, Xiu-Shen Wei

Abstract

Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: https://github.com/SEU-VIPGroup/RAAP.
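The retrieval component described above can be pictured as a top-$K$ nearest-neighbor lookup over an embedding memory. Below is a minimal sketch in PyTorch, assuming precomputed CLIP image embeddings; the `AffordanceMemory` container and its fields are illustrative placeholders, not RAAP's actual interface.

```python
# Minimal sketch of the retrieval step: given a query embedding, fetch the
# top-K most similar entries from an affordance memory by cosine similarity.
# All names below (AffordanceMemory, field layout) are illustrative only.
import torch

class AffordanceMemory:
    """Stores reference-image embeddings plus their annotated affordances."""
    def __init__(self, embeddings: torch.Tensor, contact_points, action_dirs):
        # embeddings: (N, D) CLIP features of the stored reference images
        self.embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
        self.contact_points = contact_points   # list of (u, v) pixel coords
        self.action_dirs = action_dirs         # list of unit action vectors

    def retrieve(self, query_emb: torch.Tensor, k: int = 3):
        """Return indices and similarities of the top-k references."""
        q = torch.nn.functional.normalize(query_emb, dim=-1)
        sims = self.embeddings @ q             # (N,) cosine similarities
        topk = torch.topk(sims, k)
        return topk.indices.tolist(), topk.values.tolist()
```

The returned indices select the reference images, contact points, and action vectors that feed the downstream prediction stages.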

Figures (5)

  • Figure 2: When facing a novel task or unseen object category (e.g., "open the cabinet"), Retrieval-Augmented Affordance Prediction (RAAP) retrieves semantically related experiences (e.g., "opening a microwave") and transfers the corresponding affordances to guide execution.
  • Figure 3: Overview of the Retrieval-Augmented Affordance Prediction (RAAP) framework. (a) Pipeline. Given an RGB-D input and a task label, RAAP retrieves top-$K$ references from an affordance memory using CLIP-based similarity. It then predicts a 2D affordance (contact point and action direction) and lifts it to 3D for execution. (b) 2D Affordance Prediction. Static contact points are localized via dense correspondence using Stable Diffusion (SD) features, while dynamic action directions are inferred by a cross-image alignment module. Both query and reference images are encoded with a shared SigLIP-2 backbone; reference tokens are further modulated by their action vectors via FiLM, and fused with query tokens through gated cross-attention and a Transformer. Minimal sketches of both branches follow this figure list.
  • Figure 4: Qualitative comparison of 2D affordance predictions on the DROID dataset. The first row shows the input RGB image with ground-truth affordances (contact point and action direction), and the second row visualizes predictions from RAM, A0, and RAAP ($K=3$).
  • Figure 5: MAE $\downarrow$ of RAAP as the number of retrieved references $K$ varies from 0 to 4 on the DROID dataset.
  • Figure 6: Qualitative results on the pickup kettle task in MuJoCo with a UR5e manipulator. RAAP successfully transfers handle-oriented affordances to kettles.
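The static branch in Figure 3(b) transfers the reference's annotated contact point by dense feature matching. Below is a hedged sketch with the Stable Diffusion feature extraction abstracted away: the function takes precomputed feature maps, and the shapes and names are assumptions rather than the paper's implementation.

```python
# Hedged sketch of contact-point transfer by dense correspondence: the
# reference's annotated contact pixel is mapped to the query image by
# matching dense (e.g., SD) features. Names and shapes are assumptions.
import torch
import torch.nn.functional as F

def transfer_contact_point(feat_ref, feat_qry, contact_uv):
    """Map a contact pixel from the reference image to the query image.

    feat_ref, feat_qry: (C, H, W) dense feature maps.
    contact_uv: (u, v) contact pixel annotated on the reference image.
    Returns the best-matching (u, v) pixel in the query image.
    """
    u, v = contact_uv
    desc = F.normalize(feat_ref[:, v, u], dim=0)        # (C,) reference descriptor
    C, H, W = feat_qry.shape
    qry = F.normalize(feat_qry.reshape(C, -1), dim=0)   # (C, H*W), unit columns
    sims = desc @ qry                                   # cosine similarity per pixel
    idx = int(torch.argmax(sims))
    return idx % W, idx // W                            # column (u), row (v)
```

The dynamic branch consolidates retrieved references through FiLM modulation and gated cross-attention. The module below is a minimal single-reference sketch of that pattern, not the paper's exact architecture; the dimensions, the zero-initialized gate, and the mean-pooled regression head are all assumptions.

```python
# Hedged sketch of the alignment pattern in Figure 3(b): reference tokens are
# FiLM-modulated by their action vectors, then fused into the query tokens via
# gated cross-attention. Hyperparameters and module names are assumptions.
import torch
import torch.nn as nn

class ReferenceAlignment(nn.Module):
    def __init__(self, dim: int = 768, action_dim: int = 2, num_heads: int = 8):
        super().__init__()
        # FiLM: per-channel scale/shift predicted from the reference's action vector
        self.film = nn.Linear(action_dim, 2 * dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: fusion starts as a no-op
        self.head = nn.Linear(dim, 2)             # regresses a 2D action direction

    def forward(self, query_tokens, ref_tokens, ref_action):
        # query_tokens: (B, Nq, dim)  query-image tokens from a shared backbone
        # ref_tokens:   (B, Nr, dim)  tokens of one retrieved reference image
        # ref_action:   (B, action_dim)  annotated action vector of that reference
        gamma, beta = self.film(ref_action).chunk(2, dim=-1)         # (B, dim) each
        ref_tokens = gamma.unsqueeze(1) * ref_tokens + beta.unsqueeze(1)
        fused, _ = self.cross_attn(query_tokens, ref_tokens, ref_tokens)
        query_tokens = query_tokens + torch.tanh(self.gate) * fused  # gated residual
        direction = self.head(query_tokens.mean(dim=1))              # pool, then regress
        return torch.nn.functional.normalize(direction, dim=-1)     # unit direction
```

Zero-initializing the gate lets the module start as an identity mapping over the query tokens, so retrieved references only influence the prediction as training warrants; this is a common stabilization choice for gated cross-attention, assumed here rather than taken from the paper.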