OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training

Haiyue Song, Masao Utiyama

Abstract

Continual pre-training is widely used to adapt LLMs to target languages and domains, yet the mixture ratios of the training data remain sensitive hyperparameters that are expensive to tune: they must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector, which represents the parameter shift induced by that dataset, and search for optimal composition weights post-hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data mixture and model averaging baselines with 15-35 times lower search cost. Key findings reveal that 1) the optimized weights can be interpreted as data mixture ratios, and retraining with these ratios improves data mixture CPT, and 2) the same vector pool can be re-optimized for a given objective without any retraining, producing target-tailored models on demand. Our work establishes that data mixture ratio selection, traditionally a pre-training decision, can be reformulated as a post-hoc optimization over distribution vectors, offering a more flexible paradigm for continual pre-training.
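
The core operation the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: model parameters are treated as plain dictionaries of tensors, the merge function is simplified to a weighted sum of distribution vectors rather than DARE-Linear, and all function names are illustrative.

```python
# Minimal sketch of distribution-vector extraction and weighted merging.
# Assumptions: parameters are dicts of torch tensors; the merge function
# is simplified to a plain weighted sum (the paper uses e.g. DARE-Linear).
import torch

def distribution_vector(cpt_params, base_params):
    """tau_i = theta_i - theta_base: the parameter shift induced by CPT on one dataset."""
    return {k: cpt_params[k] - base_params[k] for k in base_params}

def merge(base_params, vectors, alphas):
    """theta_merged = theta_base + sum_i alpha_i * tau_i."""
    merged = {k: v.clone() for k, v in base_params.items()}
    for tau, alpha in zip(vectors, alphas):
        for k in merged:
            merged[k] = merged[k] + alpha * tau[k]
    return merged
```

Because the distribution vectors are computed once, any candidate weight setting only requires this cheap merge rather than a new training run, which is what makes the post-hoc search tractable.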

Paper Structure

This paper contains 43 sections, 4 equations, 12 figures, 10 tables, 1 algorithm.

Figures (12)

  • Figure 1: Data Mix vs. OptiMer. (a) Continual pre-training on a fixed data mixture requires the mixing ratios $\{w_i\}$ to be specified before training begins. Each attempt costs days to weeks of GPU time. (b) Our approach trains one CPT model per dataset independently and extracts a distribution vector $\tau_i$ from each; these vectors are then composed via a merge function $\Phi$ (e.g., DARE-Linear) with weights $\{\alpha_i\}$ optimized post-hoc (a sketch of this search loop follows the figure list). Each trial completes in minutes. The instruction-tuned vector is additionally merged in both settings.
  • Figure 2: Computational cost comparison between data mixture CPT and OptiMer across different numbers of datasets during ratio optimization. Training cost is excluded, as it is identical for both approaches.
  • Figure 3: Pairwise cosine similarity and norm of distribution vectors.
  • Figure 4: PCA projection of distribution vectors with OptiMer merge weights in bar charts.
  • Figure 5: Distribution vector trajectories during CPT on 1B Japanese data, projected onto the same PCA space as used in Figure 4.
  • ...and 7 more figures
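
The post-hoc weight search described in Figure 1 can be sketched as a small optimization loop. The example below is an assumption-laden stand-in, not the paper's code: Optuna's TPE sampler substitutes for whatever Bayesian optimizer the authors use, and `evaluate_merged_model` is a hypothetical callback that merges the distribution vectors with the proposed weights (e.g. via the `merge` sketch above) and returns a validation score.

```python
# Hypothetical post-hoc search over merge weights {alpha_i}.
# Optuna's TPE sampler stands in for the paper's Bayesian optimizer;
# evaluate_merged_model is a placeholder the user must supply.
import optuna

N_VECTORS = 4  # e.g. one distribution vector each for Japanese, Chinese, Math, Code

def evaluate_merged_model(alphas):
    # Placeholder: merge the distribution vectors with these weights,
    # run the target evaluation, and return a scalar score to maximize.
    raise NotImplementedError

def objective(trial):
    alphas = [trial.suggest_float(f"alpha_{i}", 0.0, 1.0) for i in range(N_VECTORS)]
    return evaluate_merged_model(alphas)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)  # best {alpha_i} found for this objective
```

Because each trial only merges vectors and runs evaluation, with no training involved, the same loop can be re-run against a different objective to produce a target-tailored model, which is the reuse property highlighted in the abstract.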