Bounding the Optimal Performance of Online Randomized Primal-Dual Methods
Authors
Pan Xu
Abstract
The online randomized primal-dual method has widespread applications in online algorithm design and analysis. A key challenge is identifying an appropriate function space, $\mathcal{F}$, in which we search for an optimal updating function that yields the best possible lower bound on the competitiveness of a given algorithm. The choice of $\mathcal{F}$ must balance two competing objectives: on one hand, $\mathcal{F}$ should impose sufficient simplifying conditions on the candidate updating functions to facilitate worst-case analysis and establish a valid lower bound; on the other hand, it should remain general enough to offer a broad selection of candidate functions. The tradeoff is that any additional constraints on $\mathcal{F}$ that facilitate competitive analysis may also force a suboptimal choice of updating function, weakening the resulting lower bound.
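For intuition only, here is a minimal sketch of how an updating function enters the analysis; it is not taken from the abstract and is stated for the classical deterministic-rewards setting in the style of Devanur, Jain, and Kleinberg, with illustrative symbols $g$, $y$, $\alpha_u$, $\beta_v$, and $\Gamma$. When an offline vertex $u$ of weight $w_u$ at rank (or fill level) $y$ is matched to an online vertex $v$, the gain $w_u$ is split as $\alpha_u = w_u\, g(y)$ and $\beta_v = w_u\,(1 - g(y))$, and the algorithm is $\Gamma$-competitive whenever approximate dual feasibility holds in expectation:
\[
\mathbb{E}\!\left[\alpha_u + \beta_v\right] \;\ge\; \Gamma \cdot w_u \qquad \text{for every edge } (u,v).
\]
The best ratio provable this way over a space $\mathcal{F}$ is obtained by maximizing, over $g \in \mathcal{F}$, the largest $\Gamma$ for which this condition can be verified in the worst case.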
To address this challenge, we propose an auxiliary-LP-based framework capable of effectively approximating the best possible competitiveness achievable when applying the randomized primal-dual method to different function spaces. Specifically, we examine the framework introduced by Huang and Zhang (SICOMP 2024), which analyzes Stochastic Balance for vertex-weighted online matching with stochastic rewards. Our approach yields both lower and upper bounds on the best possible competitiveness attainable using the randomized primal-dual method for different choices of $\mathcal{F}$. Notably, we establish that Stochastic Balance achieves a competitiveness of at least for the problem (under equal vanishing probabilities), improving upon the previous bound of by Huang and Zhang (SICOMP 2024). Meanwhile, our analysis yields an upper bound of for a function space strictly larger than that considered in Huang and Zhang (SICOMP 2024).
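As a toy illustration of the auxiliary-LP idea (a sketch under classical, non-stochastic assumptions, not the framework of this paper or of Huang and Zhang), one can discretize a nondecreasing candidate updating function $g$ on the grid $y_i = i/n$ and search the discretized function space by linear programming:
\[
\max_{\Gamma,\; g_0 \le \cdots \le g_n \in [0,1]} \;\Gamma
\quad \text{s.t.} \quad
\frac{1}{n}\sum_{j<i} g_j + \bigl(1 - g_i\bigr) \;\ge\; \Gamma \;\; (0 \le i \le n),
\qquad
\frac{1}{n}\sum_{j<n} g_j \;\ge\; \Gamma,
\]
a discretized analogue of the classical certificate $\int_0^{\theta} g(y)\,dy + \bigl(1 - g(\theta)\bigr) \ge \Gamma$ for all $\theta \in [0,1)$ together with $\int_0^1 g(y)\,dy \ge \Gamma$. Its optimal value, $1 - (1 + 1/n)^{-n}$, approaches $1 - 1/e \approx 0.632$ as $n \to \infty$, recovering the optimal updating function $g(y) = e^{\,y-1}$; analogous but more involved auxiliary LPs can certify lower and upper bounds on the best ratio attainable over a given $\mathcal{F}$.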