Is One-Shot In-Context Learning Helpful for Data Selection in Task-Specific Fine-Tuning of Multimodal LLMs?

Xiao An, Jiaxing Sun, Ting Hu, Wei He

Abstract

Injecting world knowledge into pretrained multimodal large language models (MLLMs) is essential for domain-specific applications. Task-specific fine-tuning achieves this by adapting MLLMs to high-quality in-domain data, but it encounters scalability challenges as datasets grow, forcing a trade-off between performance and computational overhead. Existing data selection methods rely on additional scoring models or heuristic clustering and thus fail to account for both data importance and diversity; moreover, both approaches overlook the interplay among training samples. To address these limitations, we propose CLIPPER, a training-free data selection pipeline that separates parameter knowledge from world knowledge and leverages in-context learning to probe model responses to different demonstration-query combinations. CLIPPER identifies coresets that mirror the original dataset's perplexity distribution, preserving critical samples while maintaining diversity. Experiments on two MLLMs and three datasets show that CLIPPER matches full fine-tuning performance at significantly lower cost: Qwen2.5-VL-7B attains 47% data efficiency on VRSBench, and Llama-3.2-11B-Vision-Instruct reduces ScienceQA training time by 37%.
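For concreteness, the distribution-matching step described in the abstract can be read as stratified sampling over perplexity bins. The sketch below is a minimal illustration of that reading, not the paper's released code: the function name `select_coreset`, the bin count, and the proportional-quota rule are all assumptions, and the perplexity scores are presumed to come from CLIPPER's probing stage.

```python
import numpy as np

def select_coreset(perplexities, budget, num_bins=20, seed=0):
    """Pick a coreset whose perplexity histogram mirrors the full dataset's.

    perplexities: per-sample perplexity scores from a probing pass (array-like).
    budget: number of samples to keep.
    """
    rng = np.random.default_rng(seed)
    ppl = np.asarray(perplexities)

    # Bin the full dataset's perplexity distribution.
    edges = np.histogram_bin_edges(ppl, bins=num_bins)
    bin_ids = np.clip(np.digitize(ppl, edges[1:-1]), 0, num_bins - 1)

    selected = []
    for b in range(num_bins):
        members = np.flatnonzero(bin_ids == b)
        if members.size == 0:
            continue
        # Give each bin a share of the budget proportional to its mass,
        # so the coreset's histogram tracks the original distribution.
        quota = int(round(budget * members.size / ppl.size))
        quota = min(max(quota, 1), members.size)
        selected.extend(rng.choice(members, size=quota, replace=False))
    return np.asarray(selected[:budget])

# Toy usage: keep 47% of a synthetic dataset, echoing the VRSBench ratio.
ppl = np.random.lognormal(mean=1.0, sigma=0.5, size=1000)
idx = select_coreset(ppl, budget=470)
```

Because each bin's quota is proportional to its mass, rare high- and low-perplexity samples survive selection, which is one way to preserve critical samples while maintaining diversity without a scoring model or clustering step.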

Paper Structure

This paper contains 17 sections, 7 equations, 5 figures, 4 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Method comparisons of (a) importance-based, (b) diversity-based, and (c) our CLIPPER. CLIPPER requires only a two-stage inference, eliminating the need for additional fine-tuning and heuristic clustering.
  • Figure 2: Performance of Qwen2.5-VL-7B on VRSBench. The total training size is fixed and the ratio of parameter- and world-knowledge samples (P-x/W-y) is varied.
  • Figure 3: Two-stage inference workflow for CLIPPER data selection. The original datasets are divided into four subsets for diverse combinations.
  • Figure 4: Perplexity distribution for Qwen2.5-VL-7B (top row) and Llama-3.2-11B-Vision-Instruct (bottom row) on VRSBench (left column), ScienceQA (middle column), and A-OKVQA (right column).
  • Figure 5: Perplexity distribution for Qwen2.5-VL-7B (top row) and Llama-3.2-11B-Vision-Instruct (bottom row) on VRSBench (left column), ScienceQA (middle column), and A-OKVQA (right column).
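The two-stage inference of Figure 3 can be pictured as measuring how the perplexity of a held-out answer shifts when a demonstration is prepended to the query. The snippet below is a hedged, text-only sketch of that idea, not the paper's implementation: the model checkpoint, the prompt strings, and the `answer_perplexity` helper are illustrative assumptions, and the actual method operates on image-text pairs across the four demonstration-query subsets.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical text-only stand-in for the paper's MLLM probing.
model_name = "Qwen/Qwen2.5-7B-Instruct"  # illustrative choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def answer_perplexity(context: str, answer: str) -> float:
    """Perplexity of `answer` tokens conditioned on `context`."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    ans_ids = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # mask the context: score only the answer
    loss = model(input_ids=input_ids, labels=labels).loss
    return torch.exp(loss).item()

# One-shot probe: compare zero-shot vs. demonstration-conditioned perplexity.
demo = "Q: What crop fills the field?\nA: Rice paddies.\n"
query = "Q: What structure spans the river?\nA:"
answer = " A suspension bridge."
zero_shot = answer_perplexity(query, answer)
one_shot = answer_perplexity(demo + query, answer)
print(f"zero-shot ppl={zero_shot:.2f}, one-shot ppl={one_shot:.2f}")
```

Under this reading, running both stages over the demonstration-query combinations yields the per-sample perplexities that a distribution-matching selection step, like the earlier sketch, would then consume.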