Some basic questions

Q1) What are dialects, and how are they used in the MLIR setting?
Q2) Can you explain simply the difference between the traditional compilation process and compilation using dialects?
Q3) After producing an MLIR representation (a DAG), it is compiled to a set of artifacts. What is meant by artifacts here?
Q4) How does the Concrete compiler use the backend (module) for compilation? How are the front-end Python program and the backend actually connected in the end-to-end compilation (it uses Rust)?
Q5) What is the task of the Concrete Optimizer? Please elaborate on this.
Q6) The keyset is generated by calling the key_set method from ClientSupport:

    self._keyset = ClientSupport.key_set(
        self.client_specs.client_parameters,
        self._keyset_cache,
        seed_msb,
        seed_lsb,
    )

Can you describe the method, and also provide a link to it? What is the role of seed_msb and seed_lsb here?
Q7) In the compile method of Concrete ML, if I am not wrong, we prepare the inputs, then generate a proxy function, and then create a compiler instance which further converts it into an FHE circuit. What is the proxy function here? How do we generate a proxy function from the defined model?
Q8) What is in the clientspecs file?
Q9) How does Concrete ML select the optimal cryptographic parameters for a particular model?
Q10) Is there any feature for using encrypted results from a circuit as inputs in another circuit?

Would anybody help with these questions? @andrei-stoian-zama @RomanBredehoft @umutsahin and others

Hello Rish,

I’ll start with the first ones:

Q1) Maybe you could have a look at MLIR FHE Dialects - Concrete, which was recently added to the docs and explains all the dialects created by Concrete and how we use them.

Q2) For this one you could read about the compilation process: https://docs.zama.ai/concrete/explanations/compilation. Using dialects is the usual way of solving domain-specific constraints; it's not specific to our compiler, we just use these dialects to solve the FHE constraints.

Q3) Artifacts means that you don't only get a binary file: in FHE the binary works on encrypted data, so you also need to know how to encode and encrypt the inputs.

Q4) The backend is responsible for executing the cryptographic functions; it's also a way of supporting hardware acceleration.

Q5) The optimizer automatically chooses the best security, correctness, and performance parameters for your input program. You can read more here: Automatic Crypto Parameters choice - Concrete
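As a toy illustration only (the real Concrete Optimizer solves a much richer cost model over actual cryptographic parameters), parameter selection can be thought of as searching candidate parameter sets for the cheapest one that meets the security and correctness constraints. All the numbers and names below are made up for the sketch:

```python
# Toy illustration, NOT the real optimizer: pick the cheapest candidate
# parameter set that satisfies the security and error-probability constraints.
candidates = [
    # (polynomial_size, security_bits, error_probability, relative_cost)
    (1024, 128, 2**-20, 1.0),
    (2048, 128, 2**-40, 2.1),
    (4096, 192, 2**-40, 4.5),
]

def choose(candidates, min_security=128, max_p_error=2**-30):
    # Keep only parameter sets meeting both constraints, then minimize cost.
    feasible = [c for c in candidates
                if c[1] >= min_security and c[2] <= max_p_error]
    return min(feasible, key=lambda c: c[3])

best = choose(candidates)
print(best)  # (2048, 128, 2**-40, 2.1): cheapest feasible set
```

The first candidate is excluded because its error probability (2^-20) is too high, and of the two remaining sets the 2048 one is cheaper, so it wins.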


Let me extend the answer by @alex :slightly_smiling_face:

Q4) How does the Concrete compiler use the backend (module) for compilation? How are the front-end Python program and the backend actually connected in the end-to-end compilation (it uses Rust)?

The compiler has Python bindings generated using pybind11 (GitHub - pybind/pybind11: Seamless operability between C++11 and Python). See https://github.com/zama-ai/concrete/tree/main/compilers/concrete-compiler/compiler/lib/Bindings/Python.

Q6) The keyset is generated by calling the key_set method from ClientSupport:

    self._keyset = ClientSupport.key_set(
        self.client_specs.client_parameters,
        self._keyset_cache,
        seed_msb,
        seed_lsb,
    )

Can you describe the method, and also provide a link to it? What is the role of seed_msb and seed_lsb here?

Here are the relevant links:

You can find the description in the source.

The role of seed_msb and seed_lsb is to seed the random number generator during key generation. The seed is 128 bits, but since a 128-bit integer is not a standard type in C++, it needs to be split into two 64-bit numbers.
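In Python, where integers are arbitrary precision, the split into the most significant and least significant 64-bit halves can be sketched like this (purely illustrative; the actual Concrete API simply receives the two halves as separate arguments):

```python
import secrets

# A 128-bit seed as a single Python integer (illustrative).
seed = secrets.randbits(128)

# Split into two 64-bit halves: most significant and least significant.
seed_msb = (seed >> 64) & 0xFFFFFFFFFFFFFFFF
seed_lsb = seed & 0xFFFFFFFFFFFFFFFF

# Recombining the halves yields the original 128-bit seed.
assert (seed_msb << 64) | seed_lsb == seed
```

Each half fits in a C++ `uint64_t`, which is why the binding takes two parameters instead of one.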

Q8) What is in the clientspecs file?

Just client parameters in the latest main. Previously, it also had some additional information related to the signedness of inputs/outputs.

Q9) How does Concrete ML select the optimal cryptographic parameters for a particular model?

Concrete ML uses Concrete, so the optimal cryptographic parameters are selected by the Concrete Optimizer. See https://github.com/zama-ai/concrete/tree/main/compilers/concrete-optimizer.

Q10) Is there any feature for using encrypted results from a circuit as inputs in another circuit?

Not at the moment; it's on our long-term roadmap.


Indeed, we use a proxy function that is then compiled using Concrete Python.
The idea is that we have a torch or scikit-learn model that is converted to an ONNX graph, which is converted to a custom QuantizedModule object, which itself has a function that is compiled.

line 620 of quantized_module.py

forward_proxy, orig_args_to_proxy_func_args = generate_proxy_function(
    self._clear_forward, self.ordered_module_input_names
)
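To give an intuition for what such a helper does, here is a minimal, hypothetical sketch of a proxy-function generator (the name `generate_proxy_function` appears in the snippet above, but this body and return shape are illustrative assumptions, not the real Concrete ML implementation): it wraps a callable, such as a bound method like `self._clear_forward`, in a plain free function whose signature exposes the given ordered argument names, so the compiler can trace it like an ordinary function.

```python
import inspect

def generate_proxy_function(function, ordered_arg_names):
    """Wrap `function` in a plain function with explicit argument names.

    Hypothetical sketch: builds source code for a free function whose
    parameters are exactly `ordered_arg_names`, then forwards the call.
    """
    arg_list = ", ".join(ordered_arg_names)
    source = f"def proxy({arg_list}):\n    return _wrapped({arg_list})\n"
    namespace = {"_wrapped": function}
    exec(source, namespace)
    proxy = namespace["proxy"]
    # Map original argument names to the proxy's argument names
    # (identity mapping in this simplified sketch).
    orig_to_proxy = {name: name for name in ordered_arg_names}
    return proxy, orig_to_proxy

# The proxy behaves like the wrapped callable but has a clean signature.
proxy, mapping = generate_proxy_function(lambda x, y: x + y, ["a", "b"])
print(inspect.signature(proxy))  # (a, b)
print(proxy(2, 3))               # 5
```

The point is that the compiler only ever sees `proxy`, a self-contained function with named inputs, rather than a method bound to a model instance.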

I hope that answers your question, but feel free to clarify it if something isn't clear :slightly_smiling_face:
