Q-Bridge: Code Translation for Quantum Machine Learning via LLMs

Runjia Zeng, Priyabrata Senapati, Ruixiang Tang, Dongfang Liu, Qiang Guan

Abstract

Large language models (LLMs) have recently shown potential in bridging the gap between classical machine learning (CML) and quantum machine learning (QML). However, the lack of standardized, high-quality datasets and robust translation frameworks limits progress in this domain. We introduce Q-Bridge, an LLM-guided code translation framework that systematically converts CML implementations into executable QML variants. Our approach builds on a self-evolving pipeline that iteratively expands a verified seed codebase into a large-scale dataset, CML-2-QML, integrating both verifiable and unverifiable code pairs. The Q-Bridge model is fine-tuned with supervised LoRA adaptation for scalable, memory-efficient training, achieving faithful and interpretable quantum code generation across diverse architectures. Empirical analysis confirms the feasibility of direct CML-to-QML translation and reveals consistent structural alignment between the classical and quantum paradigms. Case studies further demonstrate that Q-Bridge can both maintain deterministic correctness and enable creative architectural exploration. This work establishes the first reproducible framework and dataset for LLM-driven quantum code translation, offering a foundation for scalable quantum AI development.
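To make the translation target concrete, the following is an illustrative sketch (not code from the paper) of the kind of QML artifact such a pipeline emits: a two-qubit parameterized circuit with an RY angle-encoding layer, a variational RY layer, a ZZ entangler, and a Pauli-Z measurement, simulated directly with NumPy. The circuit shape follows the encoder-plus-variational-block pattern described in the paper; the specific gate choices and dimensions here are assumptions for illustration.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def zz(theta):
    """Two-qubit ZZ entangler exp(-i * theta/2 * Z(x)Z) (diagonal phases)."""
    z = np.array([1, -1])
    return np.diag(np.exp(-1j * theta / 2 * np.kron(z, z)))

def circuit_expectation(x, weights):
    """Angle-encode inputs x, apply one variational RY layer and a ZZ
    entangler, then return the expectation of Z on qubit 0."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                   # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state      # encoder: RY(x_i)
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state  # variational RYs
    state = zz(weights[2]) @ state                   # ZZ entangler
    z0 = np.diag([1, 1, -1, -1]).astype(complex)     # Z on qubit 0
    return float(np.real(state.conj() @ z0 @ state))
```

In a real QML framework (e.g., PennyLane or Qiskit) the same structure would be expressed as a parameterized quantum circuit rather than explicit matrices; this NumPy version only makes the linear algebra of the ansatz visible.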

Paper Structure

This paper contains 26 sections, 4 equations, 9 figures, 6 tables.

Figures (9)

  • Figure 1: Classical-to-quantum translation pipeline: (a) conventional ML code is provided as input, (b) an LLM interprets the classical ML code and synthesizes an equivalent quantum ML specification, and (c) the generated design is instantiated as a parameterized quantum circuit ansatz, an encoder plus variational blocks with single-qubit rotations, ZZ entanglers, and measurements [senapati2024pqml].
  • Figure 2: The construction of Q-Bridge. It begins with ❶ establishing a robust seed codebase to form a reliable, high-quality prototype, followed by ❷ employing LLMs to expand and refine the dataset, and finally ❸ training LLMs on the CML-2-QML dataset to obtain the Q-Bridge model.
  • Figure 3: The pipeline for scaling from the seed codebase.
  • Figure 4: Distribution of scaled code lengths for classical and quantum codes.
  • Figure 5: Comparison between seed code and its scaled ML version.
  • ...and 4 more figures
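The abstract states that the Q-Bridge model is fine-tuned with supervised LoRA adaptation. As a hedged illustration of the underlying mechanism (not the paper's actual training code), the sketch below shows the core LoRA idea: a frozen pretrained weight matrix W is augmented with a trainable low-rank update scaled by alpha/r, with B zero-initialized so the adapted layer initially reproduces the pretrained behavior. All names and dimensions here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Linear layer with a frozen weight W plus a trainable
    low-rank update (alpha / r) * B @ A, as in LoRA fine-tuning."""

    def __init__(self, w, r=4, alpha=8):
        self.w = w                                       # frozen, shape (out, in)
        out_dim, in_dim = w.shape
        self.a = rng.normal(0, 0.01, size=(r, in_dim))   # trainable, small init
        self.b = np.zeros((out_dim, r))                  # trainable, zero init
        self.scale = alpha / r

    def __call__(self, x):
        # Effective weight = W + (alpha/r) * B A; only A and B are trained,
        # so at initialization (B = 0) the layer matches the frozen model.
        return x @ (self.w + self.scale * self.b @ self.a).T
```

Only A and B receive gradient updates during fine-tuning, which is what makes the approach memory-efficient for large backbones: the number of trainable parameters scales with the rank r rather than with the full weight matrix.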