CLaD: Planning with Grounded Foresight via Cross-Modal Latent Dynamics

Andrew Jeong, Jaemin Kim, Sebin Lee, Sung-Eui Yoon

Abstract

Robotic manipulation involves kinematic and semantic transitions that are inherently coupled via underlying actions. However, existing approaches plan within either semantic or latent space without explicitly aligning these cross-modal transitions. To address this, we propose CLaD, a framework that models how proprioceptive and semantic states jointly evolve under actions through asymmetric cross-attention that allows kinematic transitions to query semantic ones. CLaD predicts grounded latent foresights via self-supervised objectives with EMA target encoders and auxiliary reconstruction losses, preventing representation collapse while anchoring predictions to observable states. Predicted foresights are modulated with observations to condition a diffusion policy for action generation. On the LIBERO-LONG benchmark, CLaD achieves a 94.7% success rate, competitive with large VLAs while using significantly fewer parameters.
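The abstract names three mechanisms: asymmetric cross-attention in which kinematic transition tokens query semantic ones, an EMA target encoder that provides collapse-resistant regression targets for the latent foresight, and modulation of observation features by the predicted foresight to condition a diffusion policy. The following is a minimal PyTorch sketch of these mechanisms, not the authors' implementation; all module names, token shapes, and the FiLM-style scale/shift modulation are illustrative assumptions (the paper only says "modulated").

```python
import copy
import torch
import torch.nn as nn

class AsymmetricCrossAttention(nn.Module):
    """One-directional fusion: kinematic transition tokens query semantic ones."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, kin_tokens, sem_tokens):
        # Queries come from kinematic transitions; keys/values from semantic ones.
        fused, _ = self.attn(query=kin_tokens, key=sem_tokens, value=sem_tokens)
        return self.norm(kin_tokens + fused)  # residual fusion

class FiLMConditioning(nn.Module):
    """Foresight tokens modulate observation features before the diffusion policy.
    FiLM-style scale/shift is an assumption; other modulations would also fit."""
    def __init__(self, d_model: int = 256):
        super().__init__()
        self.to_scale_shift = nn.Linear(d_model, 2 * d_model)

    def forward(self, obs_feats, foresight):
        scale, shift = self.to_scale_shift(foresight.mean(dim=1)).chunk(2, dim=-1)
        return obs_feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

@torch.no_grad()
def ema_update(target: nn.Module, online: nn.Module, momentum: float = 0.996):
    """Target encoder slowly tracks the online one, yielding stable,
    collapse-resistant regression targets for the foresight loss."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)

# Usage with dummy shapes: fuse modalities, condition observation features,
# then update the EMA target after each optimizer step.
fusion, film = AsymmetricCrossAttention(), FiLMConditioning()
online_enc = nn.Linear(512, 256)
target_enc = copy.deepcopy(online_enc)  # never receives gradients
kin = torch.randn(8, 16, 256)    # [batch, kinematic transition tokens, dim]
sem = torch.randn(8, 49, 256)    # [batch, semantic transition tokens, dim]
obs = torch.randn(8, 32, 256)    # [batch, observation tokens, dim]
z_hat = fusion(kin, sem)         # predicted latent foresight tokens
policy_input = film(obs, z_hat)  # conditioning signal for the diffusion policy
ema_update(target_enc, online_enc)
```

The asymmetry matters: semantic tokens serve only as keys/values, so semantic transitions inform kinematic ones without the reverse pathway, matching the abstract's "kinematic transitions query semantic ones."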

Paper Structure

This paper contains 7 sections, 6 figures, and 2 tables.

Figures (6)

  • Figure 1: Overview of CLaD. (a) Conventional approaches either generate semantic artifacts (e.g., subgoal images or texts) or plan in unimodal latent spaces that lack cross-modal understanding. (b) CLaD learns cross-modal latent dynamics to predict grounded latent foresights, which condition a diffusion policy for action generation. CLaD achieves a 94.7% success rate with only 0.66B parameters, competitive with OpenVLA (7B) and $\pi_{0.5}$ (3.3B).
  • Figure 2: Pixel attribution for predicted latent foresight via Integrated Gradients (a minimal sketch of this attribution follows the figure list). Heatmaps show pixel-level contributions toward the alignment between the predicted foresight $\hat{\mathbf{z}}^{t+\tau}$ and the target embedding. Brighter regions indicate higher attribution scores. While not yielding precise object boundaries, attributions consistently highlight task-relevant objects, suggesting that the model leverages semantic features for future state prediction.
  • Figure 3: Demonstrations on task 1, 2, 3 of LIBERO-LONG.
  • Figure 4: Demonstrations on task 4, 5, 6 of LIBERO-LONG.
  • Figure 5: Demonstrations on task 7, 8, 9 of LIBERO-LONG.
  • ...and 1 more figure
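Attribution maps like those in Figure 2 can in principle be produced with standard Integrated Gradients. Below is a hedged sketch under assumed interfaces: `model` maps an image batch to the predicted foresight $\hat{\mathbf{z}}^{t+\tau}$, `target_z` is the target embedding, and cosine similarity as the alignment score is an assumption; the paper does not specify its exact attribution setup.

```python
import torch

def integrated_gradients(model, image, target_z, steps: int = 50):
    """Right-endpoint Riemann approximation of Integrated Gradients
    from a black (all-zeros) baseline."""
    baseline = torch.zeros_like(image)
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        # Interpolated input along the straight path baseline -> image.
        x = (baseline + alpha * (image - baseline)).detach().requires_grad_(True)
        z_hat = model(x)  # predicted latent foresight for the interpolated image
        # Alignment score between prediction and target embedding (cosine assumed).
        score = torch.cosine_similarity(z_hat.flatten(1), target_z.flatten(1)).sum()
        grad, = torch.autograd.grad(score, x)
        total_grads += grad
    # IG = (input - baseline) * path-averaged gradient of the score.
    attributions = (image - baseline) * total_grads / steps
    return attributions.abs().sum(dim=1)  # channel-collapsed heatmap per image
```

Overlaying the returned heatmap on the input frame reproduces the kind of visualization described in Figure 2: brighter pixels are those whose change most increases the alignment between the predicted foresight and the target embedding.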