Importance sampling for data-driven decoding of quantum error-correcting codes
Authors
Evan Peters
Abstract
Data-driven decoding (DDD), in which a decoder for the syndromes of a (quantum) error-correcting code is learned from data, can be a difficult problem due to several atypical and poorly understood properties of the training data. We introduce a theory of example importance that clarifies these unusual aspects of DDD: for instance, we show that DDD of a simple error-correcting code is equivalent to a noisy, imbalanced binary classification problem. We show that an existing technique, training neural decoders on data generated at higher error rates, is a form of importance sampling that introduces a tradeoff between class imbalance and label noise. We apply this technique to demonstrate robust improvements in the accuracy of neural network decoders trained on syndromes sampled at higher error rates, and we provide heuristic arguments for choosing an optimal error rate for the training data. We extend these analyses to decoding quantum codes involving multiple rounds of syndrome measurement, suggesting broad applicability of both example importance and importance sampling for improving experimentally relevant data-driven decoders.
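As a minimal sketch of the sampling technique described in the abstract (assuming i.i.d. bit-flip noise on a three-bit repetition code; the parity-check matrix `H` and the rates `p_train`, `p_target` are illustrative choices, not values from the paper), one can draw syndromes at an elevated error rate and reweight each example by the likelihood ratio of its error pattern under the target versus the training noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal example: 3-bit repetition code with parity-check matrix H.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
n_bits = 3

p_train, p_target = 0.2, 0.05  # sample at a higher error rate than the target

# Draw i.i.d. bit-flip errors at the (elevated) training error rate.
errors = (rng.random((10_000, n_bits)) < p_train).astype(int)
syndromes = errors @ H.T % 2  # inputs a neural decoder would be trained on

# Importance weight per example: likelihood ratio of the error pattern
# under the target noise model versus the training noise model.
k = errors.sum(axis=1)  # Hamming weight of each error pattern
weights = (p_target**k * (1 - p_target) ** (n_bits - k)) / (
    p_train**k * (1 - p_train) ** (n_bits - k)
)

# Sanity check: a weighted average of any per-example statistic estimates its
# expectation at the *target* error rate. Majority-vote decoding of the
# repetition code fails exactly when >= 2 bits flip:
fails = (k >= 2).astype(float)
est = np.mean(weights * fails)
exact = 3 * p_target**2 * (1 - p_target) + p_target**3
print(f"importance-sampled failure estimate: {est:.4f} (exact: {exact:.4f})")
```

During training, the same `weights` would multiply each example's loss, so that gradients estimate the expected loss at the target error rate even though the syndromes were sampled at the higher rate.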