Continual Vision-Language Learning for Remote Sensing: Benchmarking and Analysis

Xingxing Weng, Ruifeng Ni, Chao Pang, XiangYu Hao, Yishan Wang, Xiaokang Zhang, Wei Xu, Gui-Song Xia

Abstract

Current remote sensing vision-language models (RS VLMs) demonstrate impressive performance in image interpretation but rely on static training data, limiting their ability to accommodate continuously emerging sensing modalities and downstream tasks. This exposes a fundamental challenge: enabling RS VLMs to continually adapt without catastrophic forgetting. Despite its practical importance, the continual learning capability of RS VLMs remains underexplored, and no dedicated benchmark currently exists. In this work, we present CLeaRS, a comprehensive benchmark for continual vision-language learning in remote sensing. CLeaRS comprises 10 curated subsets with over 207k image-text pairs, spanning diverse interpretation tasks, sensing modalities, and application scenarios. We further define three evaluation protocols (long-horizon, modality-incremental, and task-incremental settings) to systematically assess continual adaptation. Extensive benchmarking of diverse vision-language models reveals catastrophic forgetting across all settings. Moreover, representative continual learning methods, when adapted to RS VLMs, exhibit limited effectiveness in handling task, instruction, and modality transitions. Our findings underscore the need for developing continual learning methods tailored to RS VLMs.

Paper Structure

This paper contains 26 sections, 1 equation, 7 figures, 9 tables.

Figures (7)

  • Figure 1: The CLeaRS benchmark comprises 10 subsets that progressively cover diverse interpretation tasks, sensing modalities, and application scenarios, facilitating systematic investigation of continual vision-language learning behaviors in RS VLMs.
  • Figure 2: Statistics of the newly constructed subsets in CLeaRS. The remaining subsets are adapted from existing image-text datasets and detailed in Appendix.
  • Figure 3: Comparison of VHM with and without supervised fine-tuning in the task-incremental setting.
  • Figure 4: Analysis of factors contributing to forgetting in RS VLMs.
  • Figure 5: Prompts for referring expression generation in SAR and infrared imagery.
  • ...and 2 more figures