
A New Perspective on Shampoo's Preconditioner

Depen Morwani, Itai Shapira, Nikhil Vyas, Eran Malach, Sham Kakade, Lucas Janson

TL;DR

The paper establishes a precise theoretical link between Shampoo's Kronecker-product preconditioner and the optimal Kronecker product approximation of the Gauss-Newton (GN) component of the Hessian or the Adagrad gradient covariance. It proves that the square of Shampoo's approximation corresponds to one round of power iteration starting from the identity, and shows empirically that this one-step Kronecker approximation closely matches the optimal Kronecker factors across multiple datasets and architectures. The work further analyzes practical tricks for approximating the Hessian, such as batch gradient averaging and the empirical Fisher (which uses the real rather than model-sampled labels), and empirically examines their impact, while discussing limitations in certain architectures (e.g., ViT) where the gap to the optimal Kronecker approximation widens. Overall, it reframes Shampoo's approximation as a near-optimal Kronecker factorization, deepening our understanding of second-order preconditioning and guiding the practical use of Hessian-based methods.

Abstract

Shampoo, a second-order optimization algorithm which uses a Kronecker product preconditioner, has recently garnered increasing attention from the machine learning community. The preconditioner used by Shampoo can be viewed either as an approximation of the Gauss--Newton component of the Hessian or the covariance matrix of the gradients maintained by Adagrad. We provide an explicit and novel connection between the $\textit{optimal}$ Kronecker product approximation of these matrices and the approximation made by Shampoo. Our connection highlights a subtle but common misconception about Shampoo's approximation. In particular, the $\textit{square}$ of the approximation used by the Shampoo optimizer is equivalent to a single step of the power iteration algorithm for computing the aforementioned optimal Kronecker product approximation. Across a variety of datasets and architectures we empirically demonstrate that this is close to the optimal Kronecker product approximation. Additionally, for the Hessian approximation viewpoint, we empirically study the impact of various practical tricks to make Shampoo more computationally efficient (such as using the batch gradient and the empirical Fisher) on the quality of Hessian approximation.
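
The abstract's central claim admits a compact numerical illustration. Below is a minimal numpy sketch (not the authors' code; the shapes, sample count, and synthetic Gaussian gradients are illustrative assumptions). It builds an Adagrad-style matrix $H = \sum_i \operatorname{vec}(G_i)\operatorname{vec}(G_i)^\top$ from per-example gradient matrices $G_i$, forms the rearrangement of $H$ used in the nearest-Kronecker-product problem, and checks that applying the rearranged matrix and its transpose to the flattened identity reproduces the Shampoo factors $L = \sum_i G_i G_i^\top$ and $R = \sum_i G_i^\top G_i$, i.e., a single power-iteration step starting from the identity. It then compares, via cosine similarity, $L \otimes R$ (the square of Shampoo's approximation) with the optimal Kronecker approximation obtained from the rank-1 SVD of the rearranged matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, num_samples = 5, 4, 200

    # Synthetic per-example gradient matrices G_i of shape (m, n).
    Gs = rng.standard_normal((num_samples, m, n))

    # Adagrad-style matrix H = sum_i vec(G_i) vec(G_i)^T (row-major vec).
    vecs = Gs.reshape(num_samples, m * n)
    H = vecs.T @ vecs

    # Rearrange H so that ||H - A kron B||_F = ||M - vec(A) vec(B)^T||_F
    # (row-major vec); this is the standard nearest-Kronecker-product setup.
    M = H.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)

    # One power-iteration step starting from the identity: applying M (resp. M^T)
    # to the flattened identity recovers the Shampoo factors L and R exactly.
    L_power = (M @ np.eye(n).reshape(-1)).reshape(m, m)
    R_power = (M.T @ np.eye(m).reshape(-1)).reshape(n, n)
    L_shampoo = np.einsum('aij,akj->ik', Gs, Gs)   # sum_i G_i G_i^T
    R_shampoo = np.einsum('aji,ajk->ik', Gs, Gs)   # sum_i G_i^T G_i
    assert np.allclose(L_power, L_shampoo)
    assert np.allclose(R_power, R_shampoo)

    # Optimal Kronecker product approximation of H: rank-1 SVD of M.
    U, s, Vt = np.linalg.svd(M)
    A_opt = np.sqrt(s[0]) * U[:, 0].reshape(m, m)
    B_opt = np.sqrt(s[0]) * Vt[0].reshape(n, n)

    def cosine(X, Y):
        x, y = X.ravel(), Y.ravel()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    print("cos(H, L kron R)          :", cosine(H, np.kron(L_shampoo, R_shampoo)))
    print("cos(H, optimal Kronecker) :", cosine(H, np.kron(A_opt, B_opt)))

By construction, the optimal Kronecker approximation attains a cosine similarity at least as large as that of $L \otimes R$; Figure 1 of the paper shows that the two are close across the datasets and architectures studied.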

Paper Structure

This paper contains 26 sections, 16 theorems, 41 equations, 5 figures, 1 table.

Key Result

Lemma 1

$(A \otimes B) \operatorname{vec}(G) = \operatorname{vec}(BGA^\top)$.
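
As a quick sanity check, the identity in Lemma 1 can be verified numerically. The snippet below assumes the column-major (Fortran-order) vectorization convention, under which the identity holds, and uses shapes chosen purely for conformability (here $B$ is $n \times n$, $G$ is $n \times m$, and $A$ is $m \times m$); it is a sketch, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 3, 4
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))
    G = rng.standard_normal((n, m))   # shaped so that B @ G @ A.T is defined

    # Column-major (Fortran-order) vectorization.
    vec = lambda X: X.reshape(-1, order="F")

    lhs = np.kron(A, B) @ vec(G)      # (A kron B) vec(G)
    rhs = vec(B @ G @ A.T)            # vec(B G A^T)
    assert np.allclose(lhs, rhs)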

Figures (5)

  • Figure 1: Top: Cosine similarity between different approximations of the Gauss--Newton (GN) component of the Hessian and its true value for different datasets and architectures. Bottom: Similar plot showing the cosine similarity between different approximations of the Adagrad preconditioner matrix and its true value. As can be seen, $\text{Shampoo}^2$ tracks the optimal Kronecker approximation much more closely than Shampoo does. MNIST-2 refers to a binary subsampled MNIST dataset. For more details about datasets and architectures, please refer to Appendix \ref{app:exp}.
  • Figure 2: Comparing $\frac{\sigma_1}{\sqrt{\sum_i \sigma_i^2}}$ and $\frac{\alpha_1 \sigma_1}{\sqrt{\sum_i \alpha_i^2 \sigma_i^2}}$ for various datasets and architectures. The top row is for $H = H_{\text{GN}}$ while the bottom row is for $H = H_{\text{Ada}}$. The $L$ and $R$ legends represent $\frac{\alpha_1 \sigma_1}{\sqrt{\sum_i \alpha_i^2 \sigma_i^2}}$ for the left and right singular vectors, respectively. The "Optimal Kronecker" legend represents $\frac{\sigma_1}{\sqrt{\sum_i \sigma_i^2}}$ (see Section \ref{sec:whyI}). As can be seen, $\frac{\alpha_1 \sigma_1}{\sqrt{\sum_i \alpha_i^2 \sigma_i^2}}$ is much closer to $1$ than $\frac{\sigma_1}{\sqrt{\sum_i \sigma_i^2}}$, demonstrating the role played by the identity initialization in ensuring convergence of power iteration in one round. See Appendix \ref{app:fig_details} for details.
  • Figure 3: Cosine similarity between approximations of $H_{\text{GN}}$ and its true value. The first row is for batch size 1, while the second row is for batch size 256. We observe a deterioration in approximation quality at the larger batch size. Note that this batch size does not refer to the batch size used for optimization; rather, it is the batch size used for the Hessian approximation.
  • Figure 4: Cosine similarity between different approximations of the Gauss--Newton (GN) component of the Hessian and its true value for different datasets and architectures. As can be seen, $\text{Shampoo}^2$ tracks the optimal Kronecker approximation much more closely than Shampoo. These plots also include the K-FAC approximation, and we note that $\text{Shampoo}^2$ always outperforms K-FAC, though they are close in some settings.
  • Figure 5: Analogue of Figure \ref{fig:main} for the ViT architecture and the CIFAR-5m dataset, shown for 3 layers of the network. For some of the panels we observe relatively large gaps between $\text{Shampoo}^2$ and the optimal Kronecker approximation.

Theorems & Definitions (22)

  • Lemma 1: kronecker
  • Lemma 2: gupta2018shampoo
  • Lemma 3: Adapted from gupta2018shampoo, anil2021towards
  • Lemma 4: approximation_with_kronecker
  • Corollary 1
  • Proposition 1
  • Lemma 4: van1993approximation
  • Proposition 2
  • ...and 12 more