Adaptive Fully Dynamic $k$-Center Clustering with (Near-)Optimal Worst-Case Guarantees

Mara Grünberger, Antonis Skarlatos

Abstract

Given a sequence of adversarial point insertions and point deletions, is it possible to simultaneously optimize the approximation ratio, update time, and recourse for a $k$-clustering problem? If so, can this be achieved with worst-case guarantees against an adaptive adversary? These questions have garnered significant attention in recent years. Prior works by Bhattacharya, Costa, Garg, Lattanzi, and Parotsidis [FOCS '24] and by Bhattacharya, Costa, and Farokhnejad [STOC '25] have taken significant steps toward this direction for the $k$-median clustering problem and its generalization, the $(k, z)$-clustering problem. In this paper, we study the $k$-center clustering problem, which is one of the most classical and well-studied $k$-clustering problems. Recently, Bhattacharya, Costa, Farokhnejad, Lattanzi, and Parotsidis [ICML '25] provided an affirmative answer to the first question for the $k$-center clustering problem. However, their work did not resolve the second question, as their result provides only expected amortized guarantees against an oblivious adversary. In this work, we make significant progress and close the gap by answering both questions in the affirmative. Specifically, we show that the fully dynamic $k$-center clustering problem admits a constant-factor approximation, near-optimal worst-case update time, and constant worst-case recourse, even against an adaptive adversary. This is achieved by first developing a fully dynamic bicriteria approximation algorithm with (near-)optimal worst-case bounds, and then designing a suitable fully dynamic $k$-center algorithm with near-linear update time. For the fully dynamic bicriteria approximation algorithm, we establish the worst-case recourse and worst-case update time guarantees separately, and then merge them into a single algorithm through a simple yet elegant process.
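The abstract concerns the fully dynamic setting; as background, the classical static baseline for $k$-center is Gonzalez's farthest-first traversal, which achieves a 2-approximation. The sketch below is illustrative context only, not the paper's dynamic algorithm, and the names `gonzalez_k_center` and `dist` are assumptions of this sketch:

```python
def gonzalez_k_center(points, k, dist):
    """Farthest-first traversal (Gonzalez): a static 2-approximation
    for k-center. Returns k centers and the resulting clustering radius."""
    centers = [points[0]]  # the first center may be chosen arbitrarily
    # d[i] = distance from points[i] to its nearest chosen center so far
    d = [dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        # next center: the point currently farthest from every center
        far = max(range(len(points)), key=lambda i: d[i])
        centers.append(points[far])
        # update nearest-center distances with the new center
        d = [min(d[i], dist(points[i], points[far])) for i in range(len(points))]
    return centers, max(d)
```

For example, on the 1-D instance `[0, 1, 10, 11]` with `k = 2` and absolute-difference distance, the traversal picks centers `0` and `11` and reports radius `1`. The dynamic algorithms in the paper must maintain such a solution under adversarial insertions and deletions, which this static routine does not address.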

Paper Structure

This paper contains 74 sections, 56 theorems, 37 equations, 4 figures, and 8 algorithms.

Key Result

Lemma 1.1

There is a randomized fully dynamic algorithm against an adaptive adversary that, given a point set $P$ in a metric space subject to point updates and an integer $k \geq 1$, maintains a subset of points $S \subseteq P$ such that: …

Figures (4)

  • Figure 1: $\hat{U}_i^{(\ell)}$ is defined at time $\tau_i^{(\ell)}$ and $U_i^{(\ell)}$ is defined at the beginning of the $\ell$-th transition phase. The set $\operatorname{Lazy}(\hat{U}_i^{(\ell)})$ is depicted at some moment during the $\ell$-th transition phase.
  • Figure 2: Illustration of successive moments in time. Superscript $^+$ indicates that the algorithm begins, $^-$ indicates that the algorithm reports completion, and $^{-+}$ indicates that the algorithm reports completion and then restarts. We just use $^-$ or $^+$ in place of $^{-+}$ whenever additional details are unnecessary for understanding the example provided. Left figure: We have $j = 4, \zeta = 2, i = 3$. Just before time $\tau_3$, the global $4$-th ball uses $\hat{B}_4^2$ constructed within $\hat{U}_4^2 = U_4^\text{(old)}$ defined at time $\tau_4^2$. Subsequently, $\mathcal{BD}_3$ reports completion at time $\tau_3$, replacing $\hat{B}_4^2$ with $\hat{B}_4^3$. Right figure: We have $j = 4, \zeta = 3, i = 2$. Just before time $\tau_2$, the global $4$-th ball uses $\hat{B}_4^3$ constructed within $\hat{U}_4^3 = U_4^\text{(old)}$ defined at time $\tau_4^3$. Subsequently, $\mathcal{BD}_2$ reports completion at time $\tau_2$, replacing $\hat{B}_4^3$ with $\hat{B}_4^2$.
  • Figure 3: Illustration of successive moments in time. Superscript $^+$ indicates that the algorithm begins, $^-$ indicates that the algorithm reports completion, and $^{-+}$ indicates that the algorithm reports completion and then restarts. We just use $^-$ or $^+$ in place of $^{-+}$ whenever additional details are unnecessary for understanding the example provided. We have $\zeta = 2$ and $\xi = j = 5$. The execution set $\hat{U}^2_4 = U_4^\text{(old)}$ is defined at time $\tau_4^2$. At time $\tau_2 = \tau^\text{(old)}$, $\mathcal{BD}_2$ reports completion and $\mathcal{BD}_4$ is restarted with $\operatorname{Lazy}(U_4^\text{(old)}) = \operatorname{Lazy}(\hat{U}^2_4)$ as input. $\mathcal{BD}_5$ reports completion at the current time $\tau_5$, and $\hat{U}_5^5$ was constructed when $\mathcal{BD}_5$ was previously restarted. The "duration of $\mathcal{BD}_4$" is upper bounded by $\epsilon\, \lvert \operatorname{Lazy}(U_4^\text{(old)}) \rvert \leq \epsilon\, (1+\epsilon) |U_4^\text{(old)}|$ according to \ref{clm:U_j_prev_U_j}.
  • Figure 4: Illustration of successive moments in time. Superscript $^{-}$ indicates that the "subalgorithm" of $\mathcal{UPD}$ reports completion. The value of $i$ is the level such that the $i$-th algorithm $\mathcal{BD}_i$ has reported completion most recently with $i \in [0, j]$. At time $\tau_j^{(\ell)}$, we have $\hat{U}^{(\ell)}_j = \hat{U}^i_j$.

Theorems & Definitions (114)

  • Lemma 1.1: dynamic MP-bi algorithm [bhattacharya2023fully, bhattacharya2025alm_opt_kcenter]
  • Theorem 1.2: [bhattacharya2025alm_opt_kcenter]
  • Theorem 1.3
  • Theorem 1.4
  • Theorem 1.5
  • Definition 3.1: $k$-center clustering problem
  • Definition 3.2: $(\alpha, \beta)$-bicriteria approximation
  • Theorem 4.1
  • Definition 4.2: non-rebuilt set
  • Proof
  • ...and 104 more