
Near-Optimal Algorithms for Omniprediction

Abstract

Omnipredictors are simple prediction functions that encode loss-minimizing predictions with respect to a hypothesis class $\mathcal{H}$, simultaneously for every loss function within a class of losses $\mathcal{L}$. In this work, we give near-optimal learning algorithms for omniprediction, in both the online and offline settings. To begin, we give an oracle-efficient online learning algorithm that achieves $(\mathcal{L},\mathcal{H})$-omniprediction with $\tilde{O}(\sqrt{T \log |\mathcal{H}|})$ regret for any class of Lipschitz loss functions $\mathcal{L}$. Quite surprisingly, this regret bound matches the optimal regret for \emph{minimization of a single loss function} (up to a $\sqrt{\log T}$ factor). Given this online algorithm, we develop an online-to-offline conversion that achieves near-optimal complexity across a number of measures. In particular, for all bounded loss functions within the class of Bounded Variation losses $\mathcal{L}_{\mathrm{BV}}$ (which include all convex, all Lipschitz, and all proper losses) and any (possibly infinite) hypothesis class $\mathcal{H}$, we obtain an offline learning algorithm that, leveraging an (offline) ERM oracle and samples from the underlying distribution $\mathcal{D}$, returns an efficient $(\mathcal{L}_{\mathrm{BV}}, \mathcal{H}, \varepsilon)$-omnipredictor for $\varepsilon$ scaling near-linearly in the Rademacher complexity of a class derived from $\mathcal{H}$ by taking convex combinations of a fixed number of elements of $\mathcal{H}$.
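For concreteness, the display below sketches the offline omniprediction guarantee in the standard form used in the omniprediction literature; the predictor $p$, the loss-specific post-processing maps $k_\ell$, and the error parameter $\varepsilon$ are notation assumed here for illustration rather than fixed by the abstract.

% Sketch of the standard (offline) omniprediction guarantee; p, k_ell, and
% varepsilon are illustrative notation, not taken from the abstract itself.
\[
  \forall\, \ell \in \mathcal{L}: \qquad
  \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\ell\bigl(y,\, k_\ell(p(x))\bigr)\right]
  \;\le\;
  \min_{h \in \mathcal{H}}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\ell\bigl(y,\, h(x)\bigr)\right]
  \;+\; \varepsilon .
\]

That is, a single predictor $p$, post-processed separately for each loss $\ell \in \mathcal{L}$, competes with the best hypothesis in $\mathcal{H}$ for every loss simultaneously; the online guarantee described in the abstract is the regret analogue of this inequality, accumulated over $T$ rounds.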