Label What Matters: Modality-Balanced and Difficulty-Aware Multimodal Active Learning

Yuqiao Zeng, Xu Wang, Tengfei Liang, Yiqing Hao, Yi Jin, Hui Yu

Abstract

Multimodal learning integrates complementary information from different modalities such as image, text, and audio to improve model performance, but its success relies on large-scale labeled data, which is costly to obtain. Active learning (AL) mitigates this challenge by selectively annotating informative samples. In multimodal settings, many approaches implicitly assume that modality importance is stable across rounds and keep selection rules fixed at the fusion stage, which leaves them insensitive to the dynamic nature of multimodal learning, where the relative value of modalities and the difficulty of instances shift as training proceeds. To address this issue, we propose RL-MBA, a reinforcement-learning framework for modality-balanced, difficulty-aware multimodal active learning. RL-MBA models sample selection as a Markov Decision Process, where the policy adapts to modality contributions, uncertainty, and diversity, and the reward encourages accuracy gains and modality balance. Two key components drive this adaptability: (1) Adaptive Modality Contribution Balancing (AMCB), which dynamically adjusts modality weights via reinforcement feedback, and (2) Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA), which estimates sample difficulty via uncertainty-based evidential fusion to prioritize informative samples. Experiments on Food101, KineticsSound, and VGGSound demonstrate that RL-MBA consistently outperforms strong baselines, improving both classification accuracy and modality fairness under limited labeling budgets.
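
To make the AMCB idea concrete: it maintains modality weights on a probability simplex and reweights them from round-level reinforcement feedback. The sketch below is a minimal NumPy example, not the paper's actual update rule; the function name update_modality_weights, the gains vector, and the step size lr are assumptions for illustration, using a generic multiplicative-weights step.

```python
# Hedged sketch of an AMCB-style weight update. The paper's exact rule
# is not reproduced in this excerpt; this uses a generic
# multiplicative-weights step that keeps the modality weights on the
# probability simplex. Names (update_modality_weights, gains, lr) are
# illustrative assumptions.
import numpy as np

def update_modality_weights(w, gains, lr=1.0):
    """Reweight modalities from round-level per-modality gains.

    w     -- current simplex weights, one per modality (sums to 1)
    gains -- observed contribution/accuracy gain per modality this round
    lr    -- step size controlling how quickly the weights adapt
    """
    w = np.asarray(w, dtype=float)
    gains = np.asarray(gains, dtype=float)
    w_new = w * np.exp(lr * gains)   # reward modalities that helped
    return w_new / w_new.sum()       # renormalize onto the simplex

# Example: audio contributed most this round, so its weight grows.
w = [1 / 3, 1 / 3, 1 / 3]            # image, text, audio
print(update_modality_weights(w, [0.01, 0.00, 0.04]))
```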

Figures (7)

  • Figure 1: Fixed rules vs. adaptive strategy in multimodal active learning. The unlabeled pool contains items with different difficulty levels and different dominant modalities. Left (1–2): fixed strategies keep selecting hard, dominant-modality samples and do not adjust as the value of each modality or the difficulty of samples changes across rounds. Right (3–4): an adaptive strategy reweights modality contributions and difficulty, yielding more balanced, budget-efficient batches.
  • Figure 2: Overview of RL-MBA. Each round consists of (1) multimodal fusion and clustering, (2) evidential uncertainty and difficulty estimation, (3) scoring to form a candidate set, (4) policy-based set selection, (5) retraining, and (6) reward and policy update.
  • Figure 3: Core components. (a) AMCB updates the simplex weights $\boldsymbol{w}$ from round-level modality gains. (b) EFDA fuses Dirichlet evidence to obtain calibrated uncertainty/difficulty estimates; the RL reward updates the selection policy (and, optionally, the uncertainty heads if enabled in the implementation). A minimal sketch of the evidence fusion appears after this figure list.
  • Figure 4: Example samples from Food101, KineticsSound, and VGGSound.
  • Figure 5: Performance comparison between the proposed method and conventional AL strategies on Food101, KineticsSound, and VGGSound. The metric is top-1 accuracy (Top-1) for both multimodal and unimodal classification.
  • ...and 2 more figures
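
Following the Figure 3 caption, EFDA derives a difficulty score by fusing per-modality Dirichlet evidence into a single calibrated uncertainty. The sketch below assumes the standard subjective-logic construction (alpha = evidence + 1, uncertainty u = K/S) and the reduced Dempster combination common in evidential multimodal classification; the paper's actual heads, losses, and fusion details may differ.

```python
# Hedged sketch of EFDA-style difficulty estimation via evidential
# fusion. Assumes subjective-logic opinions from Dirichlet evidence and
# a reduced Dempster combination; not the paper's exact implementation.
import numpy as np

def opinion(evidence):
    """Dirichlet evidence -> (belief masses b_k, uncertainty mass u)."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size                    # number of classes
    S = evidence.sum() + K               # Dirichlet strength, alpha = e + 1
    return evidence / S, K / S           # b_k = e_k / S, u = K / S

def fuse(b1, u1, b2, u2):
    """Combine two opinions; beliefs plus uncertainty still sum to 1."""
    conflict = np.outer(b1, b2).sum() - (b1 * b2).sum()  # off-diagonal mass
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# Example with 3 classes: confident image evidence, vague audio evidence.
b_img, u_img = opinion([9.0, 0.5, 0.5])   # low uncertainty (u = 3/13)
b_aud, u_aud = opinion([1.0, 1.0, 1.0])   # high uncertainty (u = 1/2)
b, u = fuse(b_img, u_img, b_aud, u_aud)
print(u)  # fused uncertainty: a difficulty score for ranking samples
```

Samples with higher fused uncertainty are treated as harder, and the policy uses this signal alongside modality contributions and diversity when forming each round's batch.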