Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models
Yicheng Xu, Yuxin Chen, Jiahao Nie, Yusong Wang, Huiping Zhuang, Manabu Okumura
TL;DR
The paper tackles cross-domain continual learning for Vision-Language Models by addressing both forgetting and the erosion of zero-shot capabilities. It introduces Regression-based Analytic Incremental Learning (RAIL), a ridge-regression adapter with primal and dual update forms that achieves absolute memorization of learned domains, plus a training-free fusion module that preserves zero-shot performance on unseen domains. A new X-TAIL setting is proposed to evaluate cross-domain discriminability without domain hints, alongside MTIL comparisons, with theoretical guarantees and empirical SOTA results across 10 domains and 1,100 classes. The approach demonstrates efficient, data-free incremental adaptation of pre-trained VLMs, improving cross-domain discriminability while maintaining zero-shot transfer, which is highly relevant for deployment in dynamic, multi-domain environments.
Abstract
Continual learning (CL) with Vision-Language Models (VLMs) has overcome the constraints of traditional CL, which only focuses on previously encountered classes. During the CL of VLMs, we need not only to prevent catastrophic forgetting of incrementally learned knowledge but also to preserve the zero-shot ability of VLMs. However, existing methods require additional reference datasets to maintain such zero-shot ability and rely on domain-identity hints to classify images across different domains. In this study, we propose Regression-based Analytic Incremental Learning (RAIL), which utilizes a recursive ridge regression-based adapter to learn from a sequence of domains in a non-forgetting manner and decouples cross-domain correlations by projecting features to a higher-dimensional space. Cooperating with a training-free fusion module, RAIL absolutely preserves the VLM's zero-shot ability on unseen domains without any reference data. Additionally, we introduce the Cross-domain Task-Agnostic Incremental Learning (X-TAIL) setting, in which a CL learner is required to incrementally learn from multiple domains and classify test images from both seen and unseen domains without any domain-identity hint. We theoretically prove RAIL's absolute memorization on incrementally learned domains. Experimental results confirm RAIL's state-of-the-art performance in both X-TAIL and existing Multi-domain Task-Incremental Learning settings. The code is released at https://github.com/linghan1997/Regression-based-Analytic-Incremental-Learning.
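To make the "non-forgetting" property concrete, here is a minimal sketch of a recursive ridge-regression adapter in the spirit the abstract describes: features are projected to a higher-dimensional space, and the ridge solution is maintained via accumulated sufficient statistics, so each new domain updates the adapter without revisiting old data. All class and variable names (`RidgeAdapter`, the random ReLU projection, the regularizer `lam`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class RidgeAdapter:
    """Hypothetical sketch of a recursive ridge-regression adapter.

    Keeps running sufficient statistics G (Gram matrix) and C
    (feature-label cross-correlation), so that the closed-form
    solution W = (G)^(-1) C after sequential updates equals the
    solution obtained by training on all domains jointly.
    """

    def __init__(self, feat_dim, proj_dim, num_classes, lam=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projection to a higher-dimensional space
        # (an assumption standing in for the paper's feature expansion).
        self.P = rng.standard_normal((feat_dim, proj_dim)) / np.sqrt(feat_dim)
        self.G = lam * np.eye(proj_dim)             # Gram matrix + ridge term
        self.C = np.zeros((proj_dim, num_classes))  # cross-correlation
        self.W = np.zeros((proj_dim, num_classes))  # current weights
        self.num_classes = num_classes

    def _expand(self, X):
        # Nonlinear expansion: random features with ReLU activation.
        return np.maximum(X @ self.P, 0.0)

    def fit_domain(self, X, y):
        """Absorb one domain's data, then recompute the closed form."""
        H = self._expand(X)
        Y = np.eye(self.num_classes)[y]  # one-hot labels
        self.G += H.T @ H
        self.C += H.T @ Y
        self.W = np.linalg.solve(self.G, self.C)

    def predict(self, X):
        return np.argmax(self._expand(X) @ self.W, axis=1)
```

Because `G` and `C` are exact sums over everything seen so far, learning domains one at a time yields the same weights as learning them jointly, which is the intuition behind the "absolute memorization" guarantee (this sketch does not include the training-free fusion module for unseen domains).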
