Simulated Bifurcation Quantum Annealing

Jakub Pawłowski, Paweł Tarasiuk, Jan Tuziemski, Łukasz Pawela, Bartłomiej Gardas

Abstract

We introduce Simulated Bifurcation Quantum Annealing (SBQA), a quantum-inspired optimization algorithm that extends the simulated bifurcation machine (SBM) by incorporating inter-replica interactions to mimic quantum tunneling. SBQA retains the efficiency and parallelism of simulated bifurcation while improving performance on sparse and rugged energy landscapes. We derive its equations of motion, analyze parameter dependence, and propose a lightweight auto-tuning strategy. A comprehensive benchmarking study on both large-scale problems and smaller instances relevant for current quantum hardware shows that SBQA systematically improves on SBM in the sparse and rugged regimes where SBM is known to struggle, while remaining competitive and versatile across a diverse set of tested problem families. These results position SBQA as a practical quantum-inspired optimization heuristic and a stronger classical baseline for the sparse and rugged regimes studied here.

Paper Structure

This paper contains 16 sections, 19 equations, 16 figures, 1 table.

Figures (16)

  • Figure 1: Heatmaps of optimality gaps $g$ in the $\alpha, \beta$ plane, demonstrating the sensitivity of the SBQA algorithm. Panels (a)--(f) correspond to different instances; see main text for details. Plotted values of $g$ are averaged over best energies from 10 independent runs for each instance, with $8$ sets of $128$ interacting replicas per run. Red contour lines correspond to a $35\%$ threshold relative to the best value obtained. The results are consistent with interpreting $\beta$ as the inverse temperature, since energies decrease with increasing $\beta$. They suggest selecting $\beta$ from the range $0.5 \lesssim \beta \lesssim 1.5$. The impact of $\alpha$ is less pronounced, but still visible, especially for the all-to-all instance in panel (a). We therefore restrict its range to $0.5 \lesssim \alpha \lesssim 1.0$. Altogether, this analysis suggests that the SBQA algorithm is stable with respect to the choice of hyperparameters, and no complicated, instance-dependent tuning is required.
  • Figure 2: Average optimality gap $g$ as a function of the number of steps for instances on (a) the $Z_7$ graph and (b) the $Z_{150}$ graph, with $N=1680$ and $N=722400$ variables, respectively. Results are averaged over $20$ ($5$) random instances in the $Z_7$ ($Z_{150}$) case and $10$ independent runs per instance; error bars show one standard deviation. Insets in panels (a) and (b) show the measured runtime as a function of the number of steps. Panel (c) shows the difference $\Delta g$ between the optimality gaps obtained by SBM and SBQA as a function of instance size, with negative values favoring SBQA. The advantage of SBQA becomes visible beyond the $Z_{12}$ graph and grows as the problems become larger and sparser.
  • Figure 3: Scaling of time-to-epsilon with problem size for (a) $\varepsilon=0.3\times 10^{-3}$ and (b) $\varepsilon=10^{-3}$, in the range between $Z_{20}$ and $Z_{150}$ graphs. Data points show a linear trend in the log-log scale, indicating the expected power-law scaling, with the exponent $\gamma(\varepsilon)$ extracted from the fit. Panel (c) shows the dependence of the scaling exponent $\gamma$ on the target optimality gap $\varepsilon$, with SBQA consistently exhibiting a smaller exponent, and thus better scaling than SBM.
  • Figure 4: Average optimality gap $g$ as a function of the number of steps for Sidon28 instances defined on the QAC logical graph with size parameter (a) $L=20$ and (b) $L=80$. Results are averaged over an ensemble of $10$ random instances for each size and $10$ independent runs per instance; error bars show one standard deviation. Insets in panels (a) and (b) show the measured runtime as a function of the number of steps. Panel (c) shows the difference $\Delta g$ between the optimality gaps obtained by SBM and SBQA at fixed $N_s=10^5$, with negative values favoring SBQA. For a sufficiently large number of steps, the advantage of SBQA becomes significant across all studied sizes.
  • Figure 5: Scaling of time-to-epsilon with problem size for (a) $\varepsilon=2\times 10^{-3}$ and (b) $\varepsilon=3 \times 10^{-3}$, in the range between $L=20$ and $L=80$ QAC instances. Bottom panel (c) shows the exponent $\gamma(\varepsilon)$ extracted from the power-law fit. Both solvers perform comparably for the optimality gap range $0.23\% \lesssim \varepsilon \lesssim 0.3\%$, but for stricter target gaps SBQA starts to show a pronounced advantage.
  • ...and 11 more figures
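Figures 3 and 5 describe extracting the scaling exponent $\gamma(\varepsilon)$ from a linear fit of time-to-epsilon versus problem size in log-log scale. The sketch below illustrates that standard procedure; the paper does not specify its fitting code, so the function name, data, and least-squares choice here are assumptions, not the authors' implementation.

```python
import numpy as np

def scaling_exponent(sizes, times):
    """Fit times ~ c * sizes**gamma by least squares in log-log space.

    A power law t = c * N**gamma becomes linear after taking logs:
    log t = gamma * log N + log c, so the fitted slope is the exponent.
    Illustrative sketch only; not the paper's actual fitting code.
    """
    log_n = np.log(np.asarray(sizes, dtype=float))
    log_t = np.log(np.asarray(times, dtype=float))
    gamma, log_c = np.polyfit(log_n, log_t, 1)  # slope, intercept
    return gamma, np.exp(log_c)

# Synthetic data with a known exponent gamma = 1.5 and prefactor c = 2.0
sizes = np.array([1e3, 1e4, 1e5, 1e6])
times = 2.0 * sizes ** 1.5
gamma, c = scaling_exponent(sizes, times)
print(round(gamma, 3), round(c, 3))  # → 1.5 2.0
```

A smaller fitted $\gamma$ for SBQA than for SBM, as reported in panels (c) of Figures 3 and 5, means its time-to-epsilon grows more slowly with problem size at the given target gap $\varepsilon$.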