A New Perspective on Shampoo's Preconditioner
Depen Morwani, Itai Shapira, Nikhil Vyas, Eran Malach, Sham Kakade, Lucas Janson
TL;DR
The paper establishes a precise theoretical link between Shampoo's Kronecker-product preconditioner and the optimal Kronecker product approximation of the Gauss–Newton (GN) component of the Hessian, or equivalently the Adagrad gradient covariance. It proves that the square of Shampoo's approximation corresponds to a single step of power iteration initialized at the identity, and shows empirically that this one-step approximation closely matches the optimal Kronecker factors across multiple datasets and architectures. The work further analyzes practical Hessian-approximation shortcuts, such as using the batch gradient and the empirical Fisher, and empirically examines their impact on approximation quality, while noting architectures (e.g., ViT) where the gap to the optimal Kronecker approximation widens. Overall, it reframes Shampoo's preconditioner as a near-optimal Kronecker factorization, deepening the understanding of second-order preconditioning and guiding the practical use of Hessian-based methods.
Abstract
Shampoo, a second-order optimization algorithm which uses a Kronecker product preconditioner, has recently garnered increasing attention from the machine learning community. The preconditioner used by Shampoo can be viewed either as an approximation of the Gauss--Newton component of the Hessian or the covariance matrix of the gradients maintained by Adagrad. We provide an explicit and novel connection between the $\textit{optimal}$ Kronecker product approximation of these matrices and the approximation made by Shampoo. Our connection highlights a subtle but common misconception about Shampoo's approximation. In particular, the $\textit{square}$ of the approximation used by the Shampoo optimizer is equivalent to a single step of the power iteration algorithm for computing the aforementioned optimal Kronecker product approximation. Across a variety of datasets and architectures we empirically demonstrate that this is close to the optimal Kronecker product approximation. Additionally, for the Hessian approximation viewpoint, we empirically study the impact of various practical tricks to make Shampoo more computationally efficient (such as using the batch gradient and the empirical Fisher) on the quality of Hessian approximation.
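The power-iteration connection described above is easy to check numerically. The sketch below is illustrative only, using random gradient matrices rather than any model or dataset from the paper: it builds the full Adagrad matrix $A = \sum_t \mathrm{vec}(G_t)\,\mathrm{vec}(G_t)^\top$, computes its Frobenius-optimal Kronecker approximation from the rank-1 SVD of the Van Loan–Pitsianis rearrangement, and compares it against the Kronecker product of Shampoo's factors $L = \sum_t G_t G_t^\top$ and $R = \sum_t G_t^\top G_t$ (rescaled by traces, since Kronecker factors are only defined up to a scalar).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, T = 4, 3, 50
Gs = [rng.standard_normal((m, n)) for _ in range(T)]

# Full Adagrad covariance A = sum_t vec(G_t) vec(G_t)^T (row-major vec).
A = sum(np.outer(G.ravel(), G.ravel()) for G in Gs)

# Van Loan-Pitsianis: the Frobenius-optimal L (x) R equals the rank-1
# truncated SVD of A rearranged into an m^2 x n^2 matrix whose row (i, j)
# holds the flattened (i, j) block of A.
R_A = A.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)
U, s, Vt = np.linalg.svd(R_A)
L_opt = (np.sqrt(s[0]) * U[:, 0]).reshape(m, m)
R_opt = (np.sqrt(s[0]) * Vt[0]).reshape(n, n)
A_opt = np.kron(L_opt, R_opt)

# Shampoo's factors: L (x) R is one power-iteration step on R_A starting
# from the identity, up to scale (the "square" of Shampoo's approximation).
L_sh = sum(G @ G.T for G in Gs)
R_sh = sum(G.T @ G for G in Gs)
A_sh = np.kron(L_sh, R_sh)
A_sh *= np.trace(A) / np.trace(A_sh)  # resolve the scale ambiguity

rel = lambda X: np.linalg.norm(A - X) / np.linalg.norm(A)
print(f"optimal Kronecker rel. error: {rel(A_opt):.4f}")
print(f"Shampoo^2 rel. error:         {rel(A_sh):.4f}")
```

By construction the optimal approximation can only be at least as accurate as Shampoo's, so the interesting quantity is how small the gap between the two relative errors is; the paper reports that this gap stays small across the datasets and architectures it studies.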
