Efficient Encrypted Computation in Convolutional Spiking Neural Networks with TFHE

Longfei Guo, Pengbo Li, Ting Gao, Yonghai Zhong, Haojie Fan, Jinqiao Duan

Abstract

With the rapid advancement of AI technology, concerns about data privacy have grown, motivating cutting-edge research on machine learning over encrypted data. Fully Homomorphic Encryption (FHE) is a crucial technology for privacy-preserving computation, but it struggles with continuous non-polynomial functions, as it operates on discrete integers and supports only addition and multiplication. Spiking Neural Networks (SNNs), which use discrete spike signals, naturally complement FHE's characteristics. In this paper, we introduce FHE-DiCSNN, a framework built on the TFHE scheme that exploits the discrete nature of SNNs for secure and efficient computation. By leveraging bootstrapping techniques, we implement Leaky Integrate-and-Fire (LIF) neuron models on ciphertexts, enabling SNNs of arbitrary depth. Our framework also adapts to other spiking neuron models, offering a novel approach to the homomorphic evaluation of SNNs. Additionally, we integrate convolutional methods inspired by CNNs to improve accuracy and reduce the simulation time associated with random encoding, and parallel computation further accelerates bootstrapping. Experimental results on the MNIST and FashionMNIST datasets validate the effectiveness of FHE-DiCSNN, with an accuracy loss of less than 3% compared to plaintext evaluation and a computation time of under 1 second per prediction. We also apply the model to real medical image classification problems and analyze parameter optimization and selection.

Paper Structure

This paper contains 20 sections, 2 theorems, 20 equations, 1 figure, 7 tables.

Key Result

Theorem 1

Under the conditions $V_{\text{reset}} = 0$ and $V_{\text{th}} = 1$, Eq. (LIF_equation) can be discretized into an equivalent form with a discretization scaling factor $\theta$. Here, the hat symbol denotes discretized values, and the discretized threshold is $\hat{V}_{\text{th}}^{\tau} = \theta \cdot \tau$. Moreover, $\hat{I}[t] = \sum_j \hat{\omega}_{j} S_j[t]$.
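To make the discretization concrete, the following is a minimal plaintext sketch of one time step of a discretized LIF neuron under the theorem's conditions ($V_{\text{reset}} = 0$, threshold $\hat{V}_{\text{th}}^{\tau} = \theta \cdot \tau$). The leak term shown is one common choice; the paper's exact discretized update is not reproduced in this extract, so names and the leak rule here are illustrative assumptions.

```python
def lif_step(v_hat, spikes_in, w_hat, theta, tau):
    """One discretized LIF time step on integer-scaled ('hatted') values.

    v_hat     : current discretized membrane potential
    spikes_in : binary input spikes S_j[t]
    w_hat     : discretized synaptic weights omega_j
    theta     : discretization scaling factor
    tau       : membrane time constant (so the threshold is theta * tau)
    Returns (new_v_hat, spike_out).
    """
    # Leak: decay the potential by a 1/tau fraction (illustrative leak rule).
    v_hat = v_hat - v_hat // tau
    # Input current: I[t] = sum_j omega_j * S_j[t], as in Theorem 1.
    i_hat = sum(w * s for w, s in zip(w_hat, spikes_in))
    v_hat = v_hat + i_hat
    # Fire and reset to V_reset = 0 when the discretized threshold is reached.
    if v_hat >= theta * tau:
        return 0, 1
    return v_hat, 0
```

In the encrypted setting, the thresholding branch is what TFHE's programmable bootstrapping replaces; here it is an ordinary comparison for clarity.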

Figures (1)

  • Figure 1: Schematic visualisation of the CSNN network. The network uses a convolutional layer with kernel_size = 3, stride = 1, padding = 1, and an average pooling layer with kernel_size = 2, stride = 2, padding = 0.
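The caption's layer parameters fix the feature-map sizes. A small helper (hypothetical, using the standard convolution output-size formula) shows that on 28×28 MNIST-style inputs the convolution preserves the spatial size while the pooling halves it:

```python
def conv_out(n, k, s, p):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# Convolution with kernel_size=3, stride=1, padding=1 keeps 28 -> 28.
h = conv_out(28, 3, 1, 1)
# Average pooling with kernel_size=2, stride=2, padding=0 halves 28 -> 14.
h = conv_out(h, 2, 2, 0)
```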

Theorems & Definitions (4)

  • Theorem 1
  • Proof
  • Theorem 2
  • Proof