Understanding Concrete

'inputs': [{'encryption': {'encoding': {'crt': [7, 8, 9, 11],
                                        'isSigned': False,
                                        'precision': 11},
                           'secretKeyID': 0,
                           'variance': 9.940977002694397e-32},
            'shape': {'dimensions': [1, 1, 8, 8],
                      'sign': False,
                      'size': 64,
                      'width': 11}}]
'outputs': [{'encryption': {'encoding': {'crt': [7, 8, 9, 11],
                                         'isSigned': True,
                                         'precision': 12},
                            'secretKeyID': 0,
                            'variance': 9.940977002694397e-32},
             'shape': {'dimensions': [1, 10], 'sign': True, 'size': 10, 'width': 12}}]

What is this width?

The width is the number of bits used in the ciphertext to represent the plaintext message.
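For intuition, here is a toy sketch (not the library's actual code) of what "the message lives in the top `width` bits" means for a 64-bit ciphertext word:

```python
# Illustrative sketch of native "message in the MSBs" encoding.
# With width = 11, the plaintext occupies the top 11 bits of a
# 64-bit word; the low bits are left for noise.
CIPHERTEXT_BITS = 64

def encode(message: int, width: int) -> int:
    """Shift the message into the most significant bits."""
    assert 0 <= message < (1 << width)
    return message << (CIPHERTEXT_BITS - width)

def decode(word: int, width: int) -> int:
    """Read back the top `width` bits, rounding away low-bit noise."""
    shift = CIPHERTEXT_BITS - width
    # Round to the nearest multiple of 2**shift before shifting down.
    return ((word + (1 << (shift - 1))) >> shift) % (1 << width)

msg = 1234                  # fits in 11 bits
word = encode(msg, 11)
noisy = word + 37           # small noise in the low bits
assert decode(noisy, 11) == msg
```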

Thanks @ayoub. I found this:

const uint64_t SMALL_KEY = 1;
const uint64_t BIG_KEY = 0;

What are these SMALL_KEY and BIG_KEY constants, and what are they used for?
Please correct me if I am wrong: encrypting arguments with a JIT compilation config is done with the client library (client.encrypt), and for compilation using library support we use ClientSupport.encrypt_arguments.

Hey @Laser_beam,
The SMALL_KEY and BIG_KEY defined in ClientParameters.h are mostly dead code, since we have improved our crypto-parameter optimization. In short, those key identifiers were used when we had just a single bootstrap/keyswitch key set. As you may know, we optimize crypto parameters for CJP (for the shortint encoding): dot product + ks + bs. In this configuration, with only one key, BIG_KEY identifies the secret key that encrypts keyswitch-input and bootstrap-output ciphertexts, while SMALL_KEY identifies the secret key for the keyswitch output and bootstrap input.

Thanks @yundsi.
std::vector<uint64_t> chunkInput(uint64_t value, size_t size,
                                 unsigned int chunkWidth) {
  std::vector<uint64_t> chunks;
  chunks.reserve(size);
  // 64-bit literal so the shift stays well-defined for chunkWidth >= 32
  uint64_t mask = (1ULL << chunkWidth) - 1;
  for (size_t i = 0; i < size; i++) {
    auto chunk = value & mask;
    chunks.push_back((uint64_t)chunk);
    value >>= chunkWidth;
  }
  return chunks;
}

Can you also explain this chunkWidth: where exactly is it used, in the computation of the encryption, in the encoding, or somewhere else?
I am also not sure I know the exact formulas for auto encoding and native encoding (I found the CRT encoding). Can someone explain these, or provide a link where they are described?

Please correct me if I am wrong about the encryption implementation of Concrete: after instantiating an encryption gate object, it is used to build a circuit gate (passed as an argument), which is further used by the LWE secret parameter function returning a secret key; a ciphertext allocation is done with allocate_lwe, and then we use _keyset.lwe.encrypt on the allocated ciphertext and the inputs.

This isn’t being used in Concrete ML, but it can be used when manipulating the compiler directly. This feature could potentially be used later in CML.

What this feature does is break an integer of N bits into N/Nchunk chunks of Nchunk (chunkWidth) bits each, and of course rewrite the computation to work on chunks instead (which is why we see the increase in the number of operations in the resulting MLIR below).

“What’s the purpose?” someone might ask: basically, to use smaller crypto parameters and thus get faster execution. Even though we perform many more operations, the cost of performing them can be lower than that of the initial computation for some use cases (computation/parameters).
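The chunking itself is easy to sketch in Python (a mirror of the chunkInput helper discussed earlier, plus its inverse; purely illustrative):

```python
def chunk_input(value: int, size: int, chunk_width: int) -> list[int]:
    """Split `value` into `size` little-endian chunks of `chunk_width` bits."""
    mask = (1 << chunk_width) - 1
    chunks = []
    for _ in range(size):
        chunks.append(value & mask)
        value >>= chunk_width
    return chunks

def reassemble(chunks: list[int], chunk_width: int) -> int:
    """Inverse of chunk_input: recombine little-endian chunks."""
    value = 0
    for i, c in enumerate(chunks):
        value |= c << (i * chunk_width)
    return value

# An 8-bit value split into 4 chunks of 2 bits, as in the MLIR below
# (each chunk travels in a wider eint<4> to leave room for carries).
assert chunk_input(0b10110100, 4, 2) == [0b00, 0b01, 0b11, 0b10]
assert reassemble(chunk_input(200, 4, 2), 2) == 200
```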

On my local setup I can for example do:

$ concretecompiler --action=dump-fhe --chunk-integers laser.mlir

where laser.mlir contains:

func.func @add_eint(%arg0: !FHE.eint<8>, %arg1: !FHE.eint<8>) -> !FHE.eint<8> {
  %1 = "FHE.add_eint"(%arg0, %arg1): (!FHE.eint<8>, !FHE.eint<8>) -> (!FHE.eint<8>)
  return %1: !FHE.eint<8>
}

and the output will be:

module {
  func.func @add_eint(%arg0: tensor<4x!FHE.eint<4>>, %arg1: tensor<4x!FHE.eint<4>>) -> tensor<4x!FHE.eint<4>> {
    %c3 = arith.constant 3 : index
    %c2 = arith.constant 2 : index
    %c1 = arith.constant 1 : index
    %cst = arith.constant dense<[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]> : tensor<16xi64>
    %c4_i5 = arith.constant 4 : i5
    %c0 = arith.constant 0 : index
    %0 = "FHE.zero"() {MANP = 1 : ui1} : () -> !FHE.eint<4>
    %1 = "FHE.zero_tensor"() {MANP = 1 : ui1} : () -> tensor<4x!FHE.eint<4>>
    %extracted = tensor.extract %arg0[%c0] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %extracted_0 = tensor.extract %arg1[%c0] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %2 = "FHE.add_eint"(%extracted, %extracted_0) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %3 = "FHE.add_eint"(%2, %0) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %4 = "FHE.apply_lookup_table"(%3, %cst) {MANP = 1 : ui1} : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>
    %5 = "FHE.mul_eint_int"(%4, %c4_i5) {MANP = 4 : ui11} : (!FHE.eint<4>, i5) -> !FHE.eint<4>
    %6 = "FHE.sub_eint"(%3, %5) {MANP = 5 : ui13} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %inserted = tensor.insert %6 into %1[%c0] {MANP = 5 : ui13} : tensor<4x!FHE.eint<4>>
    %extracted_1 = tensor.extract %arg0[%c1] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %extracted_2 = tensor.extract %arg1[%c1] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %7 = "FHE.add_eint"(%extracted_1, %extracted_2) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %8 = "FHE.add_eint"(%7, %0) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %9 = "FHE.apply_lookup_table"(%8, %cst) {MANP = 1 : ui1} : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>
    %10 = "FHE.mul_eint_int"(%9, %c4_i5) {MANP = 4 : ui11} : (!FHE.eint<4>, i5) -> !FHE.eint<4>
    %11 = "FHE.sub_eint"(%8, %10) {MANP = 5 : ui13} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %inserted_3 = tensor.insert %11 into %inserted[%c1] {MANP = 5 : ui13} : tensor<4x!FHE.eint<4>>
    %extracted_4 = tensor.extract %arg0[%c2] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %extracted_5 = tensor.extract %arg1[%c2] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %12 = "FHE.add_eint"(%extracted_4, %extracted_5) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %13 = "FHE.add_eint"(%12, %0) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %14 = "FHE.apply_lookup_table"(%13, %cst) {MANP = 1 : ui1} : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>
    %15 = "FHE.mul_eint_int"(%14, %c4_i5) {MANP = 4 : ui11} : (!FHE.eint<4>, i5) -> !FHE.eint<4>
    %16 = "FHE.sub_eint"(%13, %15) {MANP = 5 : ui13} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %inserted_6 = tensor.insert %16 into %inserted_3[%c2] {MANP = 5 : ui13} : tensor<4x!FHE.eint<4>>
    %extracted_7 = tensor.extract %arg0[%c3] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %extracted_8 = tensor.extract %arg1[%c3] {MANP = 1 : ui1} : tensor<4x!FHE.eint<4>>
    %17 = "FHE.add_eint"(%extracted_7, %extracted_8) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %18 = "FHE.add_eint"(%17, %0) {MANP = 2 : ui3} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %19 = "FHE.apply_lookup_table"(%18, %cst) {MANP = 1 : ui1} : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>
    %20 = "FHE.mul_eint_int"(%19, %c4_i5) {MANP = 4 : ui11} : (!FHE.eint<4>, i5) -> !FHE.eint<4>
    %21 = "FHE.sub_eint"(%18, %20) {MANP = 5 : ui13} : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>
    %inserted_9 = tensor.insert %21 into %inserted_6[%c3] {MANP = 5 : ui13} : tensor<4x!FHE.eint<4>>
    return %inserted_9 : tensor<4x!FHE.eint<4>>
  }
}

We basically have two different encodings (chunked integers would be a third one, but we aren’t considering it yet): native and CRT. Auto encoding just means we leave the decision of which one to use to the optimizer. Native keeps the integer as is (one integer, one ciphertext); the encoding just puts the message in the MSB bits of the ciphertext. CRT encodes each integer as its residues modulo a set of coprimes (Chinese Remainder Theorem).
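For illustration, here is a plain-Python sketch of CRT encoding/decoding with the coprimes [7, 8, 9, 11] from the parameter dump at the top of the thread (not the library's implementation):

```python
from math import prod

MODULI = [7, 8, 9, 11]          # coprimes from the parameter dump above

def crt_encode(x: int) -> list[int]:
    """One residue per coprime; each residue goes into its own ciphertext."""
    return [x % m for m in MODULI]

def crt_decode(residues: list[int]) -> int:
    """Standard CRT reconstruction."""
    big_m = prod(MODULI)        # 5544, so roughly 12 bits of plaintext
    x = 0
    for r, m in zip(residues, MODULI):
        mi = big_m // m
        # pow(mi, -1, m) is the modular inverse of mi modulo m
        x += r * mi * pow(mi, -1, m)
    return x % big_m

assert crt_decode(crt_encode(1000)) == 1000
```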

I’m not quite sure I understand. It would help if you linked to the code you are looking at at the different stages.

Thanks @ayoub for your amazing explanation. I was looking at this file (are there any test files that exercise each of the defined functions?).

seed_lsb = seed & ((2**64) - 1)
seed_msb = (seed >> 64) & ((2**64) - 1)
I do not understand this seeding procedure (can you elaborate?), and why is it broken up into MSB and LSB (is it because we cannot sample a 128-bit value directly)?

Probably this is also one of my doubts:

int64_t lweBufferSize(CircuitGate gate) {
  assert(gate.encryption.has_value());
  auto nbBlocks = gate.encryption->encoding.crt.size();
  nbBlocks = nbBlocks == 0 ? 1 : nbBlocks;

  auto param = lweSecretKeyParam(gate);
  assert(param.has_value());
  return param.value().lweSize() * nbBlocks;
}

What is param.value() here, and what does it return?

It returns a LweSecretKeyParam struct. Here we want to get the crypto parameters of the secret key used by a specific CircuitGate (which corresponds to an input, for example). This way we can know the LWE size of that specific CircuitGate, since the number of elements in the buffer of a single ciphertext is the LWE size. We multiply by the number of blocks, which is either 1 when we use native encoding, or the number of coprimes when using CRT (as we will have one LWE ciphertext for every coprime).
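The sizing logic can be sketched in Python (function and parameter names here are illustrative, mirroring the C++ snippet above):

```python
def lwe_buffer_size(lwe_size: int, crt_coprimes: list[int]) -> int:
    """Number of uint64 elements needed for one encrypted gate value:
    one LWE ciphertext per CRT block, or a single one for native encoding."""
    nb_blocks = len(crt_coprimes) or 1   # empty CRT list -> native encoding
    return lwe_size * nb_blocks

# Native encoding: one ciphertext of `lwe_size` elements.
assert lwe_buffer_size(761, []) == 761
# CRT with 4 coprimes: one ciphertext per coprime.
assert lwe_buffer_size(761, [7, 8, 9, 11]) == 4 * 761
```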

This is just a technical trick to pass a 128-bit value. The two values will be joined when used. This 128-bit seed will be used in the CSPRNG for providing randomness.
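The split and rejoin can be sketched in Python (illustrative; the actual code lives on the compiler side):

```python
MASK_64 = (1 << 64) - 1

def split_seed(seed: int) -> tuple[int, int]:
    """Break a 128-bit seed into two 64-bit words (lsb, msb)."""
    return seed & MASK_64, (seed >> 64) & MASK_64

def join_seed(lsb: int, msb: int) -> int:
    """Inverse: rebuild the 128-bit value from its halves."""
    return (msb << 64) | lsb

seed = 0x0123456789ABCDEF_FEDCBA9876543210
lsb, msb = split_seed(seed)
assert join_seed(lsb, msb) == seed
```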

from mlir._mlir_libs._concretelang._compiler import (
    KeySet as _KeySet,
)

Cell In[5], line 1
----> 1 from mlir._mlir_libs._concretelang._compiler import (
      2     KeySet as _KeySet,
      3 )

ModuleNotFoundError: No module named 'mlir'

Any way to resolve this import?

This is not the user API, but I guess you are trying to play around, right? How did you install the Python package? If you built it from source, you need to set PYTHONPATH to the location of the package, and also load the runtime library using LD_PRELOAD. You can look at the Makefile for example usage of these two.

Thanks, I could import it now.

Suppose I have a circuit with "6 manp (maxi log2 norm2)". What should I understand from this?

— Complexity for the full circuit
1.459414e+07 Millions Operations (is this the count of the total number of arithmetic operations?)

cb_decomp l,b=7,4 (I suppose this is the circuit-bootstrapping decomposition level and base)
pp_decomp l,b=3,13 (I am not sure about this one?)

2x glwe_dimension (how should I interpret this 2x?)

Will there be a conflict between global_p_error and p_error if we set both?

What does it mean when we say "load the quantizers"?

I can give a quick answer for the "what does it mean when we say 'load the quantizers'?" question: Concrete ML uses quantization to convert floating-point models to integer models. Thus, inputs go through a quantization step, and outputs are de-quantized.

When deploying a Concrete ML model, the client must perform the quantization, since the server has an FHE circuit that can only operate on integers. Thus, when preparing a model for deployment, Concrete ML saves the quantization parameters in the client-side package. When an application wants to perform inference using the remote server, it must load the client quantization parameters.
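For intuition, here is a minimal sketch of affine (scale/zero-point) quantization, the general scheme behind such quantizers; the names and values are illustrative, not Concrete ML's exact API:

```python
def quantize(x: float, scale: float, zero_point: int, n_bits: int = 8) -> int:
    """float -> int: round to the quantization grid and clip to the n_bits range."""
    q = round(x / scale) + zero_point
    return max(0, min(2**n_bits - 1, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """int -> float: the approximate inverse."""
    return (q - zero_point) * scale

# Parameters calibrated for inputs in [-1, 1] (illustrative values).
scale, zero_point = 2.0 / 255, 128
for x in (-1.0, 0.0, 0.5, 1.0):
    x_hat = dequantize(quantize(x, scale, zero_point), scale, zero_point)
    assert abs(x - x_hat) <= scale      # error bounded by one grid step
```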

Thanks @andrei-stoian-zama. Loading quantizers would mean loading quantization parameters, not quantized weights, if I understood correctly. Here is the exact context:

def load(self):  # pylint: disable=no-value-for-parameter
    """Load the quantizers along with the FHE specs."""
    client_zip_path = Path(self.path_dir).joinpath("client.zip")

    self.client = fhe.Client.load(client_zip_path, self.key_dir)

By the way, how do we get the key_dir location to instantiate it?

It would mean loading the parameters that convert float -> int for the encrypted circuit input and int -> float for the output. The weights (trained parameters) are not available to the client.

Thanks again for the clarification. Would you point me towards any link with info about the type of quantization parameters used (I mean a mathematical description)?

Hello @Laser_beam,
You can find more information about quantization in our documentation page on this subject, as well as in the associated references:

Hope this answers your question!

When compiling from a dialect lower than FHE, one needs to provide the encoding info manually to allow the client lib to be generated. Can we see an example of all the generated encodings?