Understanding secret keys in server.zip

Hello, I’m trying to understand how the secret keys are used.
In a previous post I was looking for some information and was told to download and unzip the server.zip file and look at client.specs.json. There, I found the key set dict with several secret keys, keyswitch keys, and bootstrapping keys.
I thought we only switch from one secret key to another during programmable bootstrapping, but there were more secret keys than I expected.
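In case it helps others, this is roughly how the file can be pulled out programmatically (a minimal sketch; it assumes client.specs.json sits at the root of the archive, so adjust the path to your layout):

import json
import zipfile

# Open the deployment archive and load the client specs directly.
# The path inside the zip is an assumption; adjust it to your layout.
with zipfile.ZipFile("server.zip") as archive:
    with archive.open("client.specs.json") as f:
        specs = json.load(f)

print(list(specs.keys()))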
For example, I encrypted a portion of layers at the bottom of a VGG model which was made of:
QuantReLU, QuantConv2d, AvgPool2d, QuantIdentity, QuantIdentity, Flatten, QuantIdentity, QuantLinear
(I understand there is an unneeded QuantIdentity, but let’s ignore that)
Since there is only one QuantReLU, I thought at most 2 secret keys would be needed, but these are the 5 secret keys I found:

"lweSecretKeys":[
   {
     "id":0,
     "params":{
        "lweDimension":8192,
        "integerPrecision":64,
        "keyType":"binary"
      }
   },
   {
      "id":1,
      "params":{
         "lweDimension":903,
         "integerPrecision":64,
         "keyType":"binary"
       }
   },
   {
     "id":2,
     "params":{
        "lweDimension":593,
        "integerPrecision":64,
        "keyType":"binary"
      }
   },
   {
     "id":3,
     "params":{
        "lweDimension":32768,
        "integerPrecision":64,
        "keyType":"binary"
      }
   },
   {
     "id":4,
     "params":{
        "lweDimension":926,
        "integerPrecision":64,
        "keyType":"binary"
       }
   }
]
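(For reference, a small helper along these lines can list them without relying on the exact nesting of the key set dict, which may vary between Concrete versions; it assumes the specs dict loaded from client.specs.json as above:)

def find_secret_keys(node):
    # Recursively search the specs for the "lweSecretKeys" list.
    if isinstance(node, dict):
        if "lweSecretKeys" in node:
            return node["lweSecretKeys"]
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        found = find_secret_keys(child)
        if found is not None:
            return found
    return None

for sk in find_secret_keys(specs) or []:
    print(f"secret key {sk['id']}: lweDimension = {sk['params']['lweDimension']}")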

Where exactly do we use these secret keys?

Hello @michela_polito,

The Concrete compiler optimizes the whole circuit and can use different bootstrapping keys for different parts of your program, switching between what we call partitions. Basically, you can see a partition as a set of programmable bootstrappings that share the same noise constraints, and as you said, a PBS needs 2 keys, so you probably have several partitions in your circuit.

Here my intuition is that you have 3 partitions and 1 secret key that is shared between two partitions (we share secret keys that have the same parameters). But that’s hard to say from just the dump you shared; for more information you can use the compiler_verbose_mode=True option, which prints every transformation in the pipeline and, at the TFHE dialect stage, shows you more precisely the optimizer choices applied to your computation DAG.
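For example, with Concrete ML you can pass it through a concrete-python Configuration (a sketch, assuming a recent version where compile_torch_model accepts a configuration; substitute your own model and inputset):

import numpy as np
import torch
from concrete.fhe import Configuration
from concrete.ml.torch.compile import compile_torch_model

# Tiny stand-in model; replace it with your own VGG tail.
model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU())
inputset = np.random.randn(10, 4).astype(np.float32)

# compiler_verbose_mode prints every transformation in the pipeline,
# including the TFHE dialect stage where the partition choices show up.
configuration = Configuration(compiler_verbose_mode=True)
quantized_module = compile_torch_model(model, inputset, configuration=configuration)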

I hope that helps,
Cheers.


Hi, thanks for your response!

So, the model is divided into partitions. In other words, the layers are divided into groups, each with a specific secret key, and Concrete switches between them during inference.
Did I get it right?

I used the compiler_verbose_mode=True option you suggested, it’s pretty useful, thanks!
This is one of the results I obtained for the previous example I mentioned above, i.e. the following last layers of a VGG11:
QuantReLU, QuantConv2d, AvgPool2d, QuantIdentity, QuantIdentity, Flatten, QuantIdentity, QuantLinear

0 <- Input { out_precision: 8, out_shape: Shape { dimensions_size: [1, 512, 7, 7] } }
1 <- Dot { inputs: [OperatorIndex(0)], weights: ClearTensor { shape: Shape { dimensions_size: [] }, values: [1] }, kind: Broadcast { shape: Shape { dimensions_size: [1, 512, 7, 7] } } }

#RELU:
2 <- Lut { input: OperatorIndex(1), table: FunctionTable { values: [] }, out_precision: 10 }

#AVGPOOL:
3 <- LevelledOp { inputs: [OperatorIndex(2)], complexity: LevelledComplexity { lwe_dim_cost_factor: 0.0, fixed_cost: 0.0 }, weights: [7.0710678118654755], out_shape: Shape { dimensions_size: [1, 512, 1, 1] }, comment: "FHELinalg.conv2d @/1/AveragePool.avgpool | /home/saraceno/splitInf/mlsenv/lib/python3.11/site-packages/concrete/ml/quantization/quantized_ops.py:1157:0" }
4 <- Round { input: OperatorIndex(3), out_precision: 8 }
5 <- Lut { input: OperatorIndex(4), table: FunctionTable { values: [] }, out_precision: 5 }

#FLATTEN:
6 <- LevelledOp { inputs: [OperatorIndex(5)], complexity: LevelledComplexity { lwe_dim_cost_factor: 0.0, fixed_cost: 0.0 }, weights: [1.0], out_shape: Shape { dimensions_size: [1, 512] }, comment: "tensor.collapse_shape /home/saraceno/splitInf/mlsenv/lib/python3.11/site-packages/concrete/ml/quantization/quantized_ops.py:1951:0" }

7 <- Lut { input: OperatorIndex(6), table: FunctionTable { values: [] }, out_precision: 11 }

#LINEAR:
8 <- LevelledOp { inputs: [OperatorIndex(7)], complexity: LevelledComplexity { lwe_dim_cost_factor: 0.0, fixed_cost: 0.0 }, weights: [1.0], out_shape: Shape { dimensions_size: [1, 512] }, comment: "FHELinalg.to_signed @/6/Gemm.matmul | /home/saraceno/splitInf/mlsenv/lib/python3.11/site-packages/concrete/ml/quantization/quantized_ops.py:381:0" }

%instr5
9 <- LevelledOp { inputs: [OperatorIndex(8)], complexity: LevelledComplexity { lwe_dim_cost_factor: 0.0, fixed_cost: 0.0 }, weights: [157.75931034332015], out_shape: Shape { dimensions_size: [1, 10] }, comment: "FHELinalg.matmul_eint_int @/6/Gemm.matmul | /home/saraceno/splitInf/mlsenv/lib/python3.11/site-packages/concrete/ml/quantization/quantized_ops.py:381:0" }

I hope I pasted the right part.
Anyway, I tried to identify the instructions corresponding to each layer; let me know if I made a mistake.
Do we move from partition to partition using the LUT each time?

I think the partitions are:

  1. Up to ReLU (point 2)
  2. From point 2 to point 7 after the Flatten
  3. The Linear layer.

Please, correct me if I’m wrong.

The partitions (aka “Atomic Patterns”) are always LevelledOp - Relu. In some cases several Atomic Patterns might have the same cryptographic parameters.

Your operation → layer mapping looks correct… though Lut 7 looks weird; it could be an issue with Concrete ML, since Flatten shouldn’t need a Lut.
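You can convince yourself with a quick concrete-python check (a sketch; a pure reshape should compile to a levelled operation, with no lookup table in the MLIR):

import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def flatten(x):
    return x.reshape(-1)  # reshape only, no non-linear function

inputset = [np.random.randint(0, 16, size=(2, 3)) for _ in range(10)]
circuit = flatten.compile(inputset)
print(circuit.mlir)  # expect only levelled/tensor ops, no lookup table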

Okay, that’s clear, thanks!

About Lut 7: after some research, I think that MAYBE it could be due to the AvgPool2d?
I don’t know how Concrete ML handles this type of layer, but I read that usually the sum is computed and the division is delayed to the next PBS. In this case, there is no ReLU left after the AvgPool2d, so maybe that’s it…?
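To illustrate the idea, here is a plaintext mock (my own sketch, not Concrete ML’s actual implementation): the pooling window is summed in the encrypted domain, and the division by the pool size is folded into the table of the following PBS.

import numpy as np

window = np.random.randint(0, 32, size=(7, 7))  # one 7x7 pooling window

acc = int(window.sum())  # homomorphic side: a cheap levelled sum

# The division by 49 is deferred: it is baked into the table of the
# next PBS, mocked here as a plain Python lookup table.
table = [round(v / 49) for v in range(49 * 31 + 1)]
assert table[acc] == round(window.mean())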