MD tree: a model-diagnostic tree grown on loss landscape

Yefan Zhou, Jianlong Chen, Qinxue Cao, Konstantin Schürholt, Yaoqing Yang

TL;DR

This paper proposes MD tree, a diagnosis method based on loss landscape metrics, experimentally demonstrates its advantage over classical validation-based approaches, and verifies its effectiveness in multiple practical scenarios.

Abstract

This paper considers "model diagnosis", which we formulate as a classification problem. Given a pre-trained neural network (NN), the goal is to predict the source of failure from a set of failure modes (such as a wrong hyperparameter, inadequate model size, and insufficient data) without knowing the training configuration of the pre-trained NN. The conventional diagnosis approach uses training and validation errors to determine whether the model is underfitting or overfitting. However, we show that rich information about NN performance is encoded in the optimization loss landscape, which provides more actionable insights than validation-based measurements. Therefore, we propose a diagnosis method called MD tree based on loss landscape metrics and experimentally demonstrate its advantage over classical validation-based approaches. We verify the effectiveness of MD tree in multiple practical scenarios: (1) use several models trained on one dataset to diagnose a model trained on another dataset, essentially a few-shot dataset transfer problem; (2) use small models (or models trained with small data) to diagnose big models (or models trained with big data), essentially a scale transfer problem. In a dataset transfer task, MD tree achieves an accuracy of 87.7%, outperforming validation-based approaches by 14.88%. Our code is available at https://github.com/YefanZhou/ModelDiagnosis.
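The abstract describes MD tree as a decision tree with a fixed hierarchy over loss landscape metrics whose split thresholds are fitted from only a few labeled pre-trained models. A minimal sketch of that idea is given below. Note that the metric names (`sharpness`, `connectivity`), the midpoint threshold rule, and the mapping from branches to failure modes are illustrative assumptions, not the authors' actual design; only the failure-mode labels themselves come from the abstract.

```python
# Hypothetical sketch of an MD-tree-style diagnoser: a decision tree with
# a FIXED hierarchy over loss-landscape metrics, where only the split
# thresholds are fitted (few-shot) on labeled pre-trained models.
# Metric names and the metric-to-failure mapping are illustrative
# assumptions, not the paper's actual design.

def _midpoint_threshold(values, is_positive):
    """Split threshold = midpoint between the two class-conditional means."""
    pos = [v for v, p in zip(values, is_positive) if p]
    neg = [v for v, p in zip(values, is_positive) if not p]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

class MDTreeSketch:
    """Fixed two-level tree; only the thresholds are learned."""

    def fit(self, examples):
        """examples: list of (sharpness, connectivity, label) triples."""
        # Root split (assumed): sharpness separates "wrong hyperparameter"
        # from the other failure modes.
        self.t_sharp = _midpoint_threshold(
            [s for s, _, _ in examples],
            [lbl == "wrong hyperparameter" for _, _, lbl in examples],
        )
        # Second split (assumed): among the remaining modes, connectivity
        # separates "inadequate model size" from "insufficient data".
        rest = [(c, lbl) for _, c, lbl in examples
                if lbl != "wrong hyperparameter"]
        self.t_conn = _midpoint_threshold(
            [c for c, _ in rest],
            [lbl == "inadequate model size" for _, lbl in rest],
        )
        return self

    def diagnose(self, sharpness, connectivity):
        if sharpness > self.t_sharp:
            return "wrong hyperparameter"
        if connectivity < self.t_conn:
            return "inadequate model size"
        return "insufficient data"
```

Because the tree structure is fixed and only two scalar thresholds are learned, a handful of labeled models suffices to fit it, which is what makes the few-shot dataset- and scale-transfer settings in the abstract feasible.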

Paper Structure

This paper contains 70 sections, 8 equations, 28 figures, and 4 tables.

Figures (28)

  • Figure 1: (Overview of our model-diagnostic framework using MD tree). This framework is designed to analyze and diagnose NNs where the training configuration is unknown. By examining the loss landscape structure of a given trained model, MD tree can identify potential failure sources of suboptimal performance.
  • Figure 2: (MD tree based on loss landscape structure of trained models). A decision tree (DT) using loss landscape metrics to determine distinct regimes of model configurations such that each regime has the same root cause of failure. The tree hierarchy is fixed, while the decision thresholds are trained in a few-shot manner. Part of the tree hierarchy is selected using ideas from Yang et al. (2021), which suggest a "multi-regime" structure of the hyperparameter space.
  • Figure 3: (Comparing MD tree to baseline methods on Q1 tasks with dataset and scale transfer). The $y$-axis indicates the diagnosis accuracy. (a) The $x$-axis indicates the number of pre-trained models used for building the training set. (b) The $x$-axis indicates the maximum amount of training (image) data for training models in the training set. (c) The $x$-axis indicates the maximum number of parameters of the models in the training set.
  • Figure 4: MD tree for Q1
  • Figure 6: MD tree for Q2
  • ...and 23 more figures