
SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax

Abstract

We consider high-dimensional sparse regression problems in which we observe $y = X\beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$. Our focus is on the recently introduced SLOPE estimator (Bogdan et al., 2014), which regularizes the least-squares estimates with the rank-dependent penalty $\sum_{1 \le i \le p} \lambda_i |\hat\beta|_{(i)}$, where $|\hat\beta|_{(i)}$ is the $i$th largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of $X$ are i.i.d. $\mathcal{N}(0, 1/n)$, we show that SLOPE, with weights $\lambda_i$ just about equal to $\sigma \cdot \Phi^{-1}(1 - iq/(2p))$ ($\Phi^{-1}(\alpha)$ is the $\alpha$th quantile of a standard normal and $q$ is a fixed number in $(0,1)$), achieves a squared error of estimation obeying
$$\sup_{\|\beta\|_0 \le k} \, \mathbb{P}\Big( \|\hat\beta_{\mathrm{SLOPE}} - \beta\|^2 > (1+\varepsilon)\, 2\sigma^2 k \log(p/k) \Big) \longrightarrow 0$$
as the dimension $p$ increases to $\infty$, where $\varepsilon > 0$ is an arbitrarily small constant. This holds under a weak assumption on the $\ell_0$-sparsity level, namely, $k/p \to 0$ and $(k \log p)/n \to 0$, and is sharp in the sense that this is the best possible error any estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity, and yet automatically adapts to yield optimal total squared errors over a wide range of $\ell_0$-sparsity classes. We are not aware of any other estimator with this property.
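To make the penalty concrete, the following is a minimal sketch (not the authors' implementation) that computes the weight sequence $\lambda_i = \sigma \Phi^{-1}(1 - iq/(2p))$ and evaluates the sorted-$\ell_1$ penalty $\sum_i \lambda_i |\beta|_{(i)}$ on a coefficient vector. The function names `slope_weights` and `slope_penalty` are hypothetical; only NumPy and SciPy's standard-normal quantile function `scipy.stats.norm.ppf` are assumed.

```python
# Illustrative sketch of the SLOPE penalty and its BH-type weights; names are hypothetical.
import numpy as np
from scipy.stats import norm

def slope_weights(p, q=0.1, sigma=1.0):
    """Weights lambda_i = sigma * Phi^{-1}(1 - i*q/(2p)) for i = 1..p (nonincreasing in i)."""
    i = np.arange(1, p + 1)
    return sigma * norm.ppf(1 - i * q / (2 * p))

def slope_penalty(beta, lam):
    """Sorted-l1 penalty sum_i lam_i * |beta|_(i), where |beta|_(1) >= ... >= |beta|_(p)."""
    mags = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lam, mags))

# Example usage: largest coefficients are hit with the largest weights.
p = 10
lam = slope_weights(p, q=0.1)
beta = np.random.default_rng(0).normal(size=p)
print(slope_penalty(beta, lam))
```

Because the weights are nonincreasing while the magnitudes are sorted in decreasing order, larger fitted coefficients receive heavier penalization, which is the rank-dependent behavior that distinguishes SLOPE from the flat-weight Lasso penalty.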