KANQAS: Kolmogorov-Arnold Network for Quantum Architecture Search
Akash Kundu, Aritra Sarkar, Abhishek Sadhu
TL;DR
KANQAS reframes quantum architecture search by substituting a Kolmogorov-Arnold Network (KAN) for the traditional multilayer perceptron (MLP) in a DDQN-based reinforcement learning loop. Across quantum state preparation and quantum chemistry tasks, KANQAS achieves higher or comparable performance with drastically fewer trainable parameters and is robust to hardware-like noise, albeit with longer per-episode runtimes. The work demonstrates more compact parameterized quantum circuits for molecular ground-state problems and highlights the interpretability and efficiency gains KANs offer in quantum AI design. Together, these findings suggest KANs are a promising path toward practical, hardware-aware quantum architecture search and design.
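As a rough illustration of the core substitution (a sketch, not the authors' implementation), the snippet below uses a KAN as the Q-network in a standard double-DQN update. It assumes the pykan library's KAN class; the state/action dimensions, hidden width, and hyperparameters are placeholders.

```python
# Hypothetical sketch: a KAN standing in for the MLP Q-network in a
# double-DQN update. Assumes the `pykan` library (pip install pykan);
# batch contents and all dimensions below are illustrative only.
import torch
from kan import KAN  # Kolmogorov-Arnold Network (torch.nn.Module subclass)

STATE_DIM, N_ACTIONS, GAMMA = 16, 8, 0.99

# Learnable spline activations live on edges instead of fixed activations
# on nodes, which is why a small width can suffice.
q_net = KAN(width=[STATE_DIM, 8, N_ACTIONS], grid=5, k=3)
target_net = KAN(width=[STATE_DIM, 8, N_ACTIONS], grid=5, k=3)
target_net.load_state_dict(q_net.state_dict())

optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def ddqn_update(batch):
    """One double-DQN step: online net selects actions, target net evaluates."""
    states, actions, rewards, next_states, dones = batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    loss = torch.nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any KAN implementation exposing a torch.nn.Module interface could be dropped in the same way; the surrounding DDQN machinery is unchanged.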
Abstract
Quantum Architecture Search (QAS) is a promising direction for the optimization and automated design of quantum circuits toward quantum advantage. Recent QAS techniques rely on Multi-Layer Perceptron (MLP)-based deep Q-networks. However, their interpretability remains challenging due to the large number of learnable parameters and the complexity of selecting appropriate activation functions. In this work, to overcome these challenges, we utilize the Kolmogorov-Arnold Network (KAN) in the QAS algorithm, analyzing its efficiency in the tasks of quantum state preparation and quantum chemistry. In quantum state preparation, our results show that in a noiseless scenario the probability of success is 2 to 5 times higher than with MLPs. In noisy environments, KAN outperforms MLPs in the fidelity of the approximated target states, showcasing its robustness against noise. In tackling quantum chemistry problems, we enhance a recently proposed QAS algorithm by integrating curriculum reinforcement learning with a KAN structure. This facilitates a more efficient design of parameterized quantum circuits by reducing the number of 2-qubit gates and the circuit depth required. Further investigation reveals that KAN requires significantly fewer learnable parameters than MLPs; however, the average execution time per episode is higher for KAN.
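The curriculum integration can be pictured as a training loop whose success tolerance tightens stage by stage, so the agent solves easier targets first. The following is a minimal sketch under assumed interfaces: env.set_tolerance, agent.act/remember/learn, and the threshold schedule are hypothetical names for illustration, not the paper's API.

```python
# Hypothetical sketch of the curriculum idea: the energy-error tolerance
# the agent must reach is tightened each time a stage completes. The
# environment/agent hooks and the schedule below are assumptions.
def curriculum_training(env, agent,
                        thresholds=(1e-1, 1e-2, 1e-3),
                        episodes_per_stage=500):
    for tol in thresholds:  # progressively harder tolerances
        # Assumed hook: reward success when |E - E_exact| < tol.
        env.set_tolerance(tol)
        for _ in range(episodes_per_stage):
            state, done = env.reset(), False
            while not done:
                action = agent.act(state)           # pick next gate/placement
                next_state, reward, done = env.step(action)
                agent.remember(state, action, reward, next_state, done)
                agent.learn()                        # e.g., the DDQN update above
                state = next_state
```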
