Data curation via joint example selection further accelerates multimodal learning
Talfan Evans, Nikhil Parthasarathy, Hamza Merzic, Olivier J. Henaff
TL;DR
The paper tackles the data-efficiency bottleneck in multimodal pretraining with JEST, a batch-level data curation method that scores the learnability of whole batches using a pretrained reference model. Because the contrastive batch loss decomposes into per-example terms, JEST can assemble highly learnable sub-batches with a sequential, Gibbs-like sampling procedure, and it keeps the scoring overhead practical through online model approximation and multi-resolution training (Flexi-JEST). The results show large gains in training efficiency, matching state-of-the-art performance with substantially fewer iterations and FLOPs, and show that data-quality bootstrapping (guiding large-scale training with a small, well-curated reference dataset) robustly improves generalization. Collectively, the approach demonstrates that steering the data distribution online, rather than relying solely on statically curated datasets, is a powerful lever for scalable multimodal foundation-model training, with the potential to simplify curation pipelines and expose the level of data curation as a new axis for scaling laws.
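To make the selection criterion concrete, here is a minimal sketch of per-example learnability scoring for a CLIP-style contrastive objective: each example's loss is its cross-entropy against the matched pair in the batch similarity matrix, and learnability is the learner's loss minus the reference model's loss. The function names, temperature value, and NumPy implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np

def per_example_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Softmax cross-entropy of each example against its matched pair,
    averaged over the image-to-text and text-to-image directions."""
    logits = img_emb @ txt_emb.T / temperature            # [B, B] similarity matrix
    def xent(rows):
        rows = rows - rows.max(axis=1, keepdims=True)
        log_probs = rows - np.log(np.exp(rows).sum(axis=1, keepdims=True))
        return -np.diag(log_probs)                        # loss of the matched (diagonal) pair
    return 0.5 * (xent(logits) + xent(logits.T))          # [B] per-example losses

def learnability_scores(learner_img, learner_txt, ref_img, ref_txt):
    """Learnability = loss under the current learner minus loss under a
    pretrained reference model: high for examples the learner has not yet
    mastered but that the reference model finds easy."""
    return (per_example_contrastive_loss(learner_img, learner_txt)
            - per_example_contrastive_loss(ref_img, ref_txt))
```

Because each example's loss depends on the other in-batch negatives, these scores are batch-dependent, which is what motivates selecting examples jointly rather than independently.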
Abstract
Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly selecting batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerate training beyond individually-prioritized data points. As performance improves by selecting from larger super-batches, we also leverage recent advances in model approximation to reduce the associated computational overhead. As a result, our approach, multimodal contrastive learning with joint example selection (JEST), surpasses state-of-the-art models with up to 13× fewer iterations and 10× less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing the level of data curation as a new dimension for neural scaling laws.
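As a rough illustration of the joint selection from a super-batch, the sketch below grows a sub-batch in chunks, re-scoring the remaining candidates conditioned on what has already been selected. It reuses the hypothetical learnability_scores helper from the previous sketch; the chunk count, sampling temperature, and batch sizes are placeholder assumptions rather than the paper's settings.

```python
import numpy as np

def select_sub_batch(img_l, txt_l, img_r, txt_r, batch_size=256, n_chunks=16, seed=0):
    """Assemble a learnable sub-batch from a super-batch of candidate embeddings
    (learner: img_l/txt_l, reference: img_r/txt_r) via sequential, Gibbs-like sampling.
    Assumes learnability_scores() from the previous sketch is in scope."""
    rng = np.random.default_rng(seed)
    chunk = batch_size // n_chunks
    selected, remaining = [], list(range(len(img_l)))
    for _ in range(n_chunks):
        # Conditional learnability of each candidate, scored jointly with the
        # examples chosen so far: a candidate's contrastive loss depends on the
        # other in-batch negatives, so selection is a batch-level problem.
        scores = []
        for i in remaining:
            idx = np.array(selected + [i])
            scores.append(learnability_scores(img_l[idx], txt_l[idx],
                                              img_r[idx], txt_r[idx])[-1])
        scores = np.array(scores)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Sample one chunk of candidates in proportion to conditional learnability.
        picked = rng.choice(len(remaining), size=chunk, replace=False, p=probs)
        for j in sorted(picked, reverse=True):   # pop from the back to keep indices valid
            selected.append(remaining.pop(j))
    return np.array(selected)                    # indices into the super-batch
```

The actual method scores candidates in blocks rather than one at a time, and the Flexi-JEST variant performs scoring at reduced resolution so the extra forward passes stay cheap relative to the training step.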
