daVinci-LLM: Towards the Science of Pretraining

Yiwei Qin, Yixiu Liu, Tiantian Mi, Muhang Xie, Zhen Huang, Weiye Si, Pengrui Lu, Siyuan Feng, Xia Wu, Liming Liu, Ye Luo, Jinlong Hou, Qipeng Guo, Yu Qiao, Pengfei Liu

Abstract

The pretraining phase determines a model's capability ceiling, as post-training struggles to overcome the foundations established during pretraining, yet pretraining itself remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale computational resources. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy ranging from filtering to synthesis. We train a 3B-parameter model from random initialization on 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; that different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; and that compositional balance enables targeted intensification while preventing performance collapse. We further show how evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to form accumulative scientific knowledge in pretraining.
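
To make the two-stage adaptive curriculum concrete, the sketch below shows one way such a schedule could be expressed as a token-budget-dependent data mixture. It is purely illustrative: the domain names, proportions, and switch point are hypothetical placeholders, not the values used to train daVinci-LLM.

```python
# Illustrative two-stage curriculum as a data-mixture schedule.
# All domain names, proportions, and the stage boundary below are
# hypothetical placeholders, NOT the values used for daVinci-LLM.

STAGE_1_MIX = {"web": 0.70, "code": 0.15, "math": 0.05, "books": 0.10}  # foundational
STAGE_2_MIX = {"web": 0.40, "code": 0.25, "math": 0.20, "books": 0.15}  # reasoning-intensive
STAGE_BOUNDARY_TOKENS = 6e12  # hypothetical switch point within an 8T-token run


def domain_weights(tokens_seen: float) -> dict[str, float]:
    """Return the sampling proportions for the current point in training."""
    return STAGE_1_MIX if tokens_seen < STAGE_BOUNDARY_TOKENS else STAGE_2_MIX


if __name__ == "__main__":
    for tokens in (1e12, 7e12):
        print(f"{tokens:.0e} tokens seen -> {domain_weights(tokens)}")
```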

Paper Structure

This paper contains 89 sections, 13 figures, and 10 tables.

Figures (13)

  • Figure 1: Performance comparison of daVinci-LLM-3B against baseline models, showing scores across three capability domains and an overall score comparable to OLMo-3-7B.
  • Figure 2: Evolution of pretraining research depth across institutional structures. The y-axis represents research depth from surface-level artifacts (API-only access) to fundamental scientific questions. The x-axis shows the temporal progression from 2022 to 2026. Commercial entities (blue) possess computational resources but remain constrained to API-level access due to competitive pressures. Open-weight releases (green, e.g., Llama, Qwen) provide model artifacts but withhold design rationale and negative results. Academic efforts (orange, e.g., OLMo) achieve transparency and research freedom but face severe scale limitations—making systematic exploration with 200+ configurations structurally infeasible. The top tier remains largely unexplored, as it requires the rare alignment of large-scale computational resources with the research freedom to publish comprehensive findings. daVinci-LLM (purple) occupies this intersection, conducting the extensive ablations and systematic disclosures necessary to advance the science of pretraining. By releasing the complete decision-making logic alongside the model weights, we bridge the structural gap between industrial scale and scientific transparency.
  • Figure 3: Mapping of our pretraining data sources onto the Data Darwinism L0-L9 taxonomy across the different training stages.
  • Figure 4: Progressive training results across Stage 1-1 and Stage 1-2, with checkpoints evaluated every 5000 steps. The vertical dashed line indicates the boundary between the two substages.
  • Figure 5: Stage 1 training dynamics for the first 300k steps: (a) training loss curve demonstrating consistent convergence, and (b) gradient norm curve tracking optimization stability across the initial training phase.
  • ...and 8 more figures