I read the whitepaper ("Programmable Bootstrapping Enables Efficient Homomorphic Inference of Deep Neural Networks") on programmable bootstrapping and deep neural network inference, and I was very impressed and excited by TFHE's fast bootstrapping.
I'd like to know some details about the experiments described in the paper. In particular:
How is the input MNIST image encoded? My understanding is that each pixel of a single image is encoded and encrypted into its own LWE ciphertext. Is that right?
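To make my reading concrete, here is a toy sketch of what I mean by "one LWE ciphertext per pixel." This is only an illustration of textbook LWE encryption, not the paper's or Concrete's actual implementation; the dimension, modulus, noise range, and scaling factor below are all placeholder values I chose myself.

```python
import random

# Toy LWE encryption of a single pixel value.
# Parameters are illustrative only (not the paper's actual parameters).
n, q = 630, 2**64                 # LWE dimension and ciphertext modulus
delta = 2**56                     # scaling factor: message space of q/delta = 256 values
s = [random.randrange(2) for _ in range(n)]   # binary secret key

def encrypt(m):
    """Encrypt one pixel m as an LWE ciphertext (a, b)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-8, 9)   # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + m * delta + e) % q
    return a, b

def decrypt(ct):
    """Recover the pixel by removing the mask and rounding away the noise."""
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return round(phase / delta) % (q // delta)

pixel = 5
ct = encrypt(pixel)
print(decrypt(ct))  # 5
```

So for a 28x28 MNIST image, this reading would imply 784 such ciphertexts per image.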
What are the architectures of NN-20, NN-50, and NN-100, and how many PBS (programmable bootstrapping) operations does each contain?
I'm asking because another paper (CONCRETE: Concrete Operates oN Ciphertexts Rapidly by Extending TfhE) reports that a single PBS takes about 18 ms (N = 1024, n = 630, 64-bit) on a CPU. Since the reported NN-20, NN-50, and NN-100 inference times are under 60 s, it seems these networks can contain only a few thousand sequential PBS?
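To make my arithmetic explicit, this is the back-of-envelope bound I have in mind (my own estimate, assuming purely sequential single-core execution, which the experiments may well not use):

```python
# Upper bound on sequential PBS given the reported timings.
pbs_ms = 18          # ~18 ms per PBS (CONCRETE paper, N=1024, n=630, 64-bit)
budget_s = 60        # reported inference time is under 60 s

max_sequential_pbs = budget_s * 1000 // pbs_ms
print(max_sequential_pbs)  # 3333
```

So a <60 s inference would cap the network at roughly 3,300 PBS if they run one after another on a single core, which is why I suspect either a small PBS count or parallelism.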
Are there any parallel computing techniques compatible with the Concrete library used in the experiments, so that it can utilize multicore CPUs?
Hello, could I get more detailed information about the TFHE parameters and the NN architecture?
For the TFHE parameters: what bootstrapping level and key-switching level were used in these experiments?
For the NN architecture: what convolution window size did you use? I still don't understand how you arrive at 840 PBS for the first layer. With an original image size of 28x28, a convolution that produces an output of the same size should need only 784 PBS.
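Here is the counting argument behind my 784 figure, assuming (my assumption, not stated in the paper) one PBS per output activation of the first convolutional layer:

```python
# PBS count for the first conv layer under my one-PBS-per-output-pixel assumption.
h = w = 28                         # MNIST image size
pbs_same = h * w                   # "same" padding: output size equals input size
print(pbs_same)                    # 784

# With "valid" padding the output shrinks, so the count would be even lower.
k = 3                              # window size k is a guess; the paper doesn't say
pbs_valid = (h - k + 1) * (w - k + 1)
print(pbs_valid)                   # 676
```

Either way I get at most 784 PBS, not 840, so I assume I'm missing something about the layer (extra output channels, padding, or PBS used elsewhere)?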
Also, did you always perform a key-switching operation before each PBS in these experiments?