Maximizing Qubit Throughput under Buffer Decoherence and Variability in Generation

Padma Priyanka, Avhishek Chatterjee, Sheetal Kalyani

Abstract

Quantum communication networks require the transmission of high-fidelity, uncoded qubits for applications such as entanglement distribution and quantum key distribution. However, current implementations are constrained by limited buffer capacity and qubit decoherence, which degrades qubit quality while qubits wait in the buffer. A key challenge arises from the stochastic nature of qubit generation: there is a random delay (D) between the initiation of a generation request and the availability of the qubit. This induces a fundamental trade-off: early initiation increases buffer waiting time and hence decoherence, whereas delayed initiation leads to server idling and reduced throughput. We model this system as an admission control problem in a finite-buffer queue, where the reward associated with each job is a decreasing function of its sojourn time. We derive analytical conditions under which a simple "no lag" policy, in which a new qubit is generated immediately upon the availability of buffer space, is optimal. For scenarios with unknown system parameters, we further develop a Bayesian learning framework that adaptively optimizes the admission policy. Beyond quantum communication systems, the proposed model also applies to delay-sensitive IoT sensing and service systems.
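
To make the model concrete, below is a minimal discrete-event simulation sketch of one plausible instantiation: a single-slot buffer feeding a single server, with exponential generation delay and exponential service, and an exponential-decay reward in the sojourn time. The function simulate_reward, the coherence-time parameter t_coh, and the specific reward shape exp(-sojourn/t_coh) are illustrative assumptions rather than the paper's definitions; the sketch only illustrates the trade-off between early initiation (more buffer decoherence) and delayed initiation (more server idling).

```python
import math
import random


def simulate_reward(lag, t_s=1.0, t_d=0.33, t_coh=2.0, num_jobs=200_000, seed=0):
    """Simulate a single-buffer, single-server queue with generation lag.

    A new generation request is issued `lag` time units after the buffer slot
    frees up; the qubit materialises after an exponential delay D (mean t_d),
    waits until the server is free, and is transmitted with an exponential
    service time S (mean t_s).  Its reward is exp(-sojourn / t_coh), a
    hypothetical decreasing function of its sojourn time (buffer wait plus
    service).  Returns the long-run reward earned per unit time.
    """
    rng = random.Random(seed)
    server_free_at = 0.0   # when the server finishes its current transmission
    slot_free_at = 0.0     # when the buffer slot last became free
    total_reward = 0.0
    clock = 0.0
    for _ in range(num_jobs):
        arrival = slot_free_at + lag + rng.expovariate(1.0 / t_d)
        start = max(arrival, server_free_at)   # qubit waits if the server is busy
        finish = start + rng.expovariate(1.0 / t_s)
        total_reward += math.exp(-(finish - arrival) / t_coh)
        slot_free_at = start        # the slot frees once the qubit enters service
        server_free_at = finish
        clock = finish
    return total_reward / clock


if __name__ == "__main__":
    # Small lags keep the server busy but add buffer decoherence; large lags
    # reduce waiting but leave the server idle.
    for lag in (0.0, 0.25, 0.5, 1.0):
        print(f"lag = {lag:4.2f}   reward rate = {simulate_reward(lag):.4f}")
```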

Paper Structure

This paper contains 15 sections, 5 theorems, 35 equations, 5 figures, 2 tables, and 1 algorithm.

Key Result

Theorem 1

The general reward function $G$ associated with the proposed queueing system, characterized by a general service distribution and a general delay distribution, with a general reward function $f$ and a deterministically chosen lag, attains its global maximum at $\Delta = 0$ if the following conditions hold:
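
One plausible formalization of the objective (an assumption for illustration; the paper's exact definition of $G$ may differ) is the long-run reward rate under a fixed lag $\Delta$:

$$G(\Delta) = \lim_{T \to \infty} \frac{1}{T} \sum_{i :\, C_i \le T} f\left(C_i - A_i\right),$$

where $A_i$ and $C_i$ are the arrival and service-completion times of the $i$-th qubit, $C_i - A_i$ is its sojourn time, and $f$ is the decreasing per-qubit reward. In this reading, Theorem 1 asserts that $G$ is maximized by initiating generation with zero lag, $\Delta = 0$, under the stated conditions on the service and delay distributions and on $f$.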

Figures (5)

  • Figure 1: Calculation of inter-arrival time
  • Figure 2: 2D scatter plot of the reward function versus service and delay time parameters. (a) Exponential service and exponential delay distribution. (b) Uniform service and uniform delay distribution
  • Figure 3: Comparison of the grid-search estimated reward and the theoretical surrogate reward for different service and delay distributions with $t_s = 1, t_d = 0.33$
  • Figure 4: Comparison of the Bayesian estimated reward and the theoretical surrogate reward for different service and delay distributions with $t_s = 1, t_d = 0.33$
  • Figure 5: Mean-shift comparison of Bayesian estimated reward and the theoretical surrogate reward for exponential service and exponential delay distributions. (a) Gradual change in mean of service and delay distribution. (b) Abrupt change in mean of service and delay distribution.

Theorems & Definitions (10)

  • Theorem 1
  • Proof
  • Lemma 1
  • Proof
  • Corollary 1
  • Proof
  • Corollary 2
  • Proof
  • Theorem 2
  • Proof