
Robust Principal Component Completion

Yinjian Wang, Wei Li, Yuanyuan Gui, James E. Fowler, Gemine Vivone

Abstract

Robust principal component analysis (RPCA) seeks a low-rank component and a sparse component from their summation. Yet, in many applications of interest, the sparse foreground actually replaces, or occludes, elements from the low-rank background. To address this mismatch, a new framework is proposed in which the sparse component is identified indirectly through determining its support. This approach, called robust principal component completion (RPCC), is solved via variational Bayesian inference applied to a fully probabilistic Bayesian sparse tensor factorization. Convergence to a hard classifier for the support is shown, thereby eliminating the post-hoc thresholding required of most prior RPCA-driven approaches. Experimental results reveal that the proposed approach delivers near-optimal estimates on synthetic data as well as robust foreground-extraction and anomaly-detection performance on real color video and hyperspectral datasets, respectively. Source implementation and Appendices are available at https://github.com/WongYinJ/BCP-RPCC.
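The modeling mismatch the abstract describes can be made concrete with a small numerical sketch. The snippet below (an illustration, not the paper's implementation; the names `L`, `F`, and `Omega` are chosen here for exposition) contrasts the classical additive RPCA observation model with the occlusion model that motivates RPCC, where foreground values replace background entries on the support:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank background: L = A @ B with rank r << min(m, n).
m, n, r = 50, 60, 3
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Sparse support Omega: a small fraction of entries carry foreground.
Omega = rng.random((m, n)) < 0.05
F = 5.0 * rng.standard_normal((m, n))  # foreground values

# Classical RPCA observation model: additive superposition.
Y_additive = L + np.where(Omega, F, 0.0)

# Occlusion model underlying RPCC: on its support, the foreground
# *replaces* (occludes) the background rather than adding to it.
Y_occluded = np.where(Omega, F, L)

# Off the support both models agree with the background L;
# on the support, only the additive model still contains L.
assert np.allclose(Y_additive[~Omega], L[~Omega])
assert np.allclose(Y_occluded[~Omega], L[~Omega])
assert np.allclose(Y_occluded[Omega], F[Omega])
```

Under occlusion, the background entries on the support are simply missing, which is why RPCC recovers them indirectly by estimating the support rather than a sparse additive component.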

Paper Structure

This paper contains 24 sections, 1 theorem, 52 equations, 14 figures, 8 tables, 1 algorithm.

Key Result

Proposition 5.1

Suppose that $\widehat{\mathcal{Y}}$ is bounded and let $\xi_0$ denote the stationary point of $\mathscr{L}_{k}(\xi)$. Then $\lim_{\sigma\rightarrow0}\xi_0(1-\xi_0)=0.$
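A one-line elaboration of why this limit yields a hard classifier (an interpretive note, assuming only that $\xi_0$ is a posterior support probability confined to $[0,1]$, consistent with the abstract):

```latex
% For x \in [0,1], the product x(1-x) vanishes only at the endpoints:
x(1-x) = 0 \iff x \in \{0, 1\}.
% Hence \lim_{\sigma \to 0} \xi_0(1-\xi_0) = 0 forces \xi_0 toward
% 0 or 1, i.e., a hard 0/1 support decision, so no post-hoc
% thresholding of the support estimate is required.
```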

Figures (14)

  • Figure 1: Visual comparison of various quantities of the RPCA formulation for an image.
  • Figure 2: B-Unfolding: The reorganization of each tensor block into a single column of a matrix.
  • Figure 3: Box plots of RRSE and IoU on synthetic data. RRSE is standardized via Z-score transformation. The statistics in blue in the upper area are the original mean (mn), standard deviation (std) and median (md) of RRSE for each group of 20 runs.
  • Figure 4: Example frames, along with ground-truth foreground masks and ROIs, from the four CDnet videos used for the foreground-extraction experiments.
  • Figure 5: Hyperparameter tuning for foreground extraction using $\operatorname{F1}$ and $\operatorname{IoU}$. Row 1: Tuning $\sigma$ when $R=25$. Row 2: Tuning $R$ when $\sigma=10^{-3}$.
  • ...and 9 more figures

Theorems & Definitions (7)

  • Definition 3.1: Blockwise Unfolding (B-unfolding) [WLG2025]
  • Definition 3.2: Blockwise Support
  • Definition 3.3: Blockwise Projector
  • Definition 5.1: RPCC
  • Remark 5.1
  • Proposition 5.1
  • Proof of Proposition 5.1