
GoldiCLIP: The Goldilocks Approach for Balancing Explicit Supervision for Language-Image Pretraining

Deen Dayal Mohan, Hossein Souri, Vitali Petsiuk, Juhong Min, Gopal Sharma, Luowei Zhou, Suren Kumar

Abstract

Until recently, the success of large-scale vision-language models (VLMs) has relied primarily on billion-sample datasets, posing a significant barrier to progress. Recent works have begun to close this gap by improving supervision quality, but each addresses only a subset of the weaknesses in contrastive pretraining. We present GoldiCLIP, a framework built on a Goldilocks principle of finding the right balance of supervision signals. Our multifaceted training framework synergistically combines three key innovations: (1) a text-conditioned self-distillation method that aligns both text-agnostic and text-conditioned features; (2) an encoder-integrated decoder with a Visual Question Answering (VQA) objective that enables the encoder to generalize beyond caption-like queries; and (3) an uncertainty-based weighting mechanism that automatically balances the heterogeneous losses. Trained on just 30 million images, 300× less data than leading methods, GoldiCLIP achieves state-of-the-art performance among data-efficient approaches, improving over the best comparable baseline by 2.2 points on MSCOCO retrieval, 2.0 on fine-grained retrieval, and 5.9 on question-based retrieval, while remaining competitive with billion-scale models. Project page: https://petsi.uk/goldiclip.
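The third component, uncertainty-based loss weighting, is described only at a high level here. A common realization of this idea is the homoscedastic-uncertainty weighting of Kendall et al. (2018); the following is a minimal PyTorch sketch under that assumption (the class name, the task count, and the loss names are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Balances heterogeneous losses with learned homoscedastic uncertainty
    (Kendall et al., 2018): total = sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learnable per-task log-variance."""

    def __init__(self, num_tasks: int):
        super().__init__()
        # One log-variance per task, initialized to 0 (unit weight).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses: list[torch.Tensor]) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for loss, log_var in zip(losses, self.log_vars):
            # exp(-log_var) shrinks the weight of noisy (high-variance) tasks;
            # the +log_var regularizer stops log_var from growing without bound.
            total = total + torch.exp(-log_var) * loss + log_var
        return total

# Hypothetical usage with the three objective families named in the abstract:
# weighting = UncertaintyWeighting(num_tasks=3)
# total_loss = weighting([loss_contrastive, loss_distill, loss_vqa])
```

Because each log-variance is learned jointly with the model, objectives that remain noisy during training are automatically down-weighted, which matches the abstract's claim of balancing all losses without hand-tuned coefficients.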

Figures (7)

  • Figure 1: An overview of the GoldiCLIP approach. Input images, captions, and auxiliary textual data (such as VQA) are processed by the student and teacher models, each consisting of a text encoder and a vision encoder. The teacher model is constructed as an exponential moving average (EMA) of the student and is used to produce robust, stable global image representations, which serve as targets for the local representations of the student model following the self-distillation approach (Sec. \ref{subsec:selfdistillation}); a minimal sketch of the EMA update follows this list. In our framework, the text embedding is contrastively aligned not only with the standard image embedding but also with the text-conditioned image embedding (Sec. \ref{subsec:contrastive_objectives}). The multimodal decoder uses image patch tokens along with the textual tokens from the text encoder to perform generative tasks such as VQA (Sec. \ref{subsec:decoder_objectives}). Finally, all objectives are automatically weighted using a task-balancing approach (Sec. \ref{subsec:task_balancing}).
  • Figure 2: Example from RetVQA [penamakuri2023answer], where the task is to retrieve relevant images for a question that can be used as context for generative models. For each query, there are two relevant images and a set of distractors. Unlike FLAIR [xiao2025flair], our model correctly retrieves the images containing the door knob and the train that are relevant to the query.
  • Figure 3: Two people sitting, one on a wheelchair and the other on a bench.
  • Figure 4: A young man jumping to catch a frisbee in the front yard.
  • Figure 6: Values of task coefficients changing over training epochs.
  • ...and 2 more figures
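Figure 1 states that the teacher is an EMA of the student and is never trained directly. The following is a minimal sketch of that construction, assuming a standard momentum update (the helper names and the 0.999 momentum are illustrative assumptions, not values from the paper):

```python
import copy
import torch
import torch.nn as nn

def make_teacher(student: nn.Module) -> nn.Module:
    """The teacher starts as a frozen copy of the student and receives
    no gradients; only the EMA rule below moves its weights."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module,
               momentum: float = 0.999) -> None:
    """Per parameter: teacher <- momentum * teacher + (1 - momentum) * student."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```

Called once per optimizer step, this keeps the teacher a smoothed trail of the student, which is what makes its global image representations stable enough to serve as self-distillation targets for the student's local features.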