
Eliminating Vendor Lock-In in Quantum Machine Learning via Framework-Agnostic Neural Networks

Poornima Kumaresan, Shwetha Singaravelu, Lakshmi Rajendran, Santhosh Sivasubramani

Abstract

Quantum machine learning (QML) stands at the intersection of quantum computing and artificial intelligence, offering the potential to solve problems that remain intractable for classical methods. However, the current landscape of QML software frameworks suffers from severe fragmentation: models developed in TensorFlow Quantum cannot execute on PennyLane backends, circuits authored in Qiskit Machine Learning cannot be deployed to Amazon Braket hardware, and researchers who invest in one ecosystem face prohibitive switching costs when migrating to another. This vendor lock-in impedes reproducibility, limits hardware access, and slows the pace of scientific discovery. In this paper, we present a framework-agnostic quantum neural network (QNN) architecture that abstracts away vendor-specific interfaces through a unified computational graph, a hardware abstraction layer (HAL), and a multi-framework export pipeline. The core architecture supports simultaneous integration with TensorFlow, PyTorch, and JAX as classical co-processors, while the HAL provides transparent access to IBM Quantum, Amazon Braket, Azure Quantum, IonQ, and Rigetti backends through a single application programming interface (API). We introduce three pluggable data encoding strategies (amplitude, angle, and instantaneous quantum polynomial encoding) that are compatible with all supported backends. An export module leveraging Open Neural Network Exchange (ONNX) metadata enables lossless circuit translation across Qiskit, Cirq, PennyLane, and Braket representations. We benchmark our framework on the Iris, Wine, and MNIST-4 classification tasks, demonstrating training time parity (within 8% overhead) compared to native framework implementations, while achieving identical classification accuracy.


Paper Structure

This paper contains 45 sections, 14 equations, 6 figures, 10 tables, and 1 algorithm.

Figures (6)

  • Figure 1: Top-level architecture of the framework-agnostic QNN. The QNN Core maintains a vendor-independent circuit DAG. Framework adapters (TensorFlow, PyTorch, JAX) provide native integration with classical optimizers and autograd engines. The hardware abstraction layer (HAL) dispatches circuits to multiple quantum backends. The export module enables lossless circuit translation via ONNX metadata.
  • Figure 2: Hardware compatibility matrix showing supported backend-framework combinations. Each cell indicates whether a given quantum hardware backend is accessible through a given classical framework adapter. Dark cells indicate native support; light cells indicate support through the HAL's transpilation engine.
  • Figure 3: Circuit diagrams for the three data encoding strategies. Left: amplitude encoding using multiplexed $R_y$ rotations. Centre: angle encoding with one $R_y$ gate per qubit. Right: IQP encoding with Hadamard, phase, and CZ gates (one repetition shown).
  • Figure 4: Multi-framework export pipeline. A trained QNN model is exported from the internal DAG representation to Qiskit, Cirq, PennyLane, or Braket formats. ONNX metadata preserves training history, optimizer state, and encoding configuration.
  • Figure 5: Training loss convergence across frameworks for three classification tasks. The curves for Qiskit, Cirq, and PennyLane overlap closely, confirming that our framework adapters produce equivalent optimization trajectories regardless of the underlying backend. All frameworks converge within 100 epochs.
  • ...and 1 more figure
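To make the angle encoding of Figure 3 concrete, the following is a minimal, framework-free sketch of how one $R_y$ rotation per qubit maps a classical feature vector to a quantum state. It simulates the state vector directly with NumPy rather than using the paper's actual HAL or any vendor SDK; the function name `angle_encode` and its signature are illustrative assumptions, not the paper's API.

```python
import numpy as np

def angle_encode(features):
    """Angle encoding: apply R_y(x_i) to the i-th qubit of |0...0>.

    R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so the full
    n-qubit state is the tensor (Kronecker) product of the
    single-qubit states. Requires n qubits for n features.
    """
    state = np.array([1.0])
    for x in features:
        qubit = np.array([np.cos(x / 2.0), np.sin(x / 2.0)])
        state = np.kron(state, qubit)
    return state

# Two features -> two qubits. The first qubit ends up in an equal
# superposition (R_y(pi/2)), the second remains in |0>.
psi = angle_encode([np.pi / 2, 0.0])
```

The same pattern extends to the IQP encoding of Figure 3 by interleaving Hadamard, phase, and CZ layers, though that is more naturally expressed through a circuit library such as Qiskit or PennyLane than through raw state vectors.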