Fast and Accurate Probing of In-Training LLMs' Downstream Performances

Zhichen Liu, Tianle Lun, Zhibin Wen, Hao An, Yulin Ou, Jianhui Xu, Hao Zhang, Wenyi Fang, Yang Zheng, Yang Xu

Abstract

Scaling Large Language Models (LLMs) in both parameter count and test-time compute has pushed the boundaries of AI capabilities, but at the cost of making the traditional generative evaluation paradigm prohibitively expensive, so the latency of evaluating an LLM's downstream performance during training becomes unbearable. Meanwhile, simple metrics such as training loss (perplexity) do not always correlate with downstream performance, as their trends sometimes diverge from actual task outcomes. This dilemma calls for a method that is both computationally efficient and sufficiently accurate in measuring model capabilities. To address this challenge, we introduce a new in-training evaluation paradigm that uses lightweight probes to monitor downstream performance. The probes take the internal representations of LLM checkpoints (during training) as input and directly predict each checkpoint's performance on downstream tasks, measured by success probability (i.e., pass@1). We design several probe architectures and validate their effectiveness on OLMo3-7B checkpoints across a diverse set of downstream tasks. The probes accurately predict a checkpoint's performance (avg. AUROC$>$0.75), generalize reasonably well across checkpoints (earlier checkpoints predict later ones), and reduce evaluation latency from $\sim$1 hr (with conventional generative evaluation) to $\sim$3 min. In sum, this work presents a practical and scalable in-training downstream evaluation paradigm, enabling a more agile, informed, and efficient LLM development process.
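To make the probing paradigm concrete, the following minimal sketch illustrates the general idea rather than the paper's exact architecture: a small MLP probe maps a checkpoint's pooled hidden states for each benchmark prompt to a predicted pass@1 probability, is trained with a binary cross-entropy objective, and is scored with AUROC. All names, shapes, and hyperparameters (e.g., `ProbeMLP`, mean pooling, hidden width 256) are assumptions made for illustration.

```python
# Minimal, illustrative sketch of a downstream-performance probe
# (assumptions for illustration, not the paper's exact architecture).
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score


class ProbeMLP(nn.Module):
    """Lightweight probe: pooled hidden state -> predicted pass@1 logit."""

    def __init__(self, d_model: int, d_hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, pooled_hidden: torch.Tensor) -> torch.Tensor:
        # pooled_hidden: (n_prompts, d_model), e.g. mean-pooled over prompt tokens
        return self.net(pooled_hidden).squeeze(-1)  # (n_prompts,) logits


def train_probe(probe, features, labels, epochs=20, lr=1e-3):
    """Fit the probe on (hidden-state, pass@1) pairs from an evaluated checkpoint."""
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(features), labels.float())
        loss.backward()
        opt.step()
    return probe


@torch.no_grad()
def probe_auroc(probe, features, labels):
    """AUROC of probe predictions against ground-truth per-prompt success."""
    scores = torch.sigmoid(probe(features)).cpu().numpy()
    return roc_auc_score(labels.cpu().numpy(), scores)
```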



Figures (4)

  • Figure 1: Workflow comparison between the current generative evaluation paradigm and the proposed probing evaluation paradigm for in-training downstream monitoring. The current paradigm suffers from high evaluation cost and excessive latency, resulting in delayed feedback on training progress. In contrast, the probe bypasses the generation process during the evaluation phase, enabling rapid and timely assessment.
  • Figure 2: Demonstration of the in-training downstream performance evaluation framework during the training stage, and the structure of the two lightweight probe models. Probe training is applied only at specific checkpoints; once a probe is trained, it maintains its predictive ability on future checkpoints (see the sketch after this list).
  • Figure 3: Cumulative time consumption for Probe Evaluation vs. Generative Evaluation on pre-training checkpoints (OLMo-3-Base).
  • Figure 4: Cumulative time consumption for Probe Evaluation vs. Generative Evaluation on post-training checkpoints for Instruct and Think models.
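The monitoring workflow described in Figure 2 can be sketched roughly as follows, under the same illustrative assumptions as the probe sketch above. Hidden states are extracted with a single forward pass per benchmark prompt, the probe is (re)trained only at designated checkpoints where full generative evaluation was run, and every other checkpoint is scored by the probe alone, which is what replaces the $\sim$1 hr generative pass with a $\sim$3 min forward-and-probe pass. `ProbeMLP` and `train_probe` come from the earlier sketch; `extract_pooled_hidden_states` and `run_generative_eval` are hypothetical helpers.

```python
# Illustrative in-training monitoring loop (hypothetical helper names).
# `extract_pooled_hidden_states`: one forward pass per prompt, no generation.
# `run_generative_eval`: full autoregressive evaluation yielding per-prompt pass@1 labels.
import torch


def monitor_checkpoints(checkpoints, prompts, probe_train_steps):
    """checkpoints: iterable of (step, model); probe_train_steps: steps with full generative eval."""
    probe = None
    for step, model in checkpoints:
        # Cheap feature extraction at every monitored checkpoint.
        feats = extract_pooled_hidden_states(model, prompts)   # (n_prompts, d_model)
        if step in probe_train_steps:
            # Expensive path, run occasionally: generative eval provides pass@1 labels,
            # then the probe is fitted (or refreshed) on this checkpoint.
            labels = run_generative_eval(model, prompts)        # (n_prompts,) in {0, 1}
            probe = train_probe(ProbeMLP(feats.shape[-1]), feats, labels)
        elif probe is not None:
            # Fast path: the trained probe predicts downstream success directly.
            with torch.no_grad():
                pred_pass_at_1 = torch.sigmoid(probe(feats)).mean().item()
            print(f"step {step}: predicted mean pass@1 ~ {pred_pass_at_1:.3f}")
```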