Loading all the keys

How do I load the keys and print the key tensors (public, private, evaluation keys)?
In this setting I believe we are encrypting with the public key. Can we get a little more transparency about parameter generation from the client specs?
It would be interesting to see the underlying process from the Python front-end code.

Hello @Laser_beam,
It seems that you have several questions here, which is great! Let me try to answer them all:

How do I load the keys and print the key tensors (public, private, evaluation keys)?

It’s not possible to print the keys! Out of curiosity, why would you want to print them? If you really want to take a look at the keys, you could still replace the temporary directory paths in the OnDiskNetwork class and then open the files.
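If you do go that route, inspecting the files could look roughly like this (a minimal sketch; the directory path is hypothetical and depends on where you point the OnDiskNetwork temporary directories):

from pathlib import Path

# Hypothetical location where the key files end up after redirecting the
# OnDiskNetwork temporary directories to a fixed path.
keys_dir = Path("./keys_client")

for key_file in sorted(keys_dir.rglob("*")):
    if key_file.is_file():
        raw = key_file.read_bytes()  # serialized key material, as raw bytes
        print(key_file.name, len(raw), "bytes")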

In this setting I believe we are encrypting with the public key.

No! Encryption is done with the private key only, on the client side!
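To make the client-side flow concrete, here is a rough sketch using the Concrete ML deployment API (the paths and the input array are made up for illustration):

import numpy

from concrete.ml.deployment import FHEModelClient

# Hypothetical paths: "./deployment" holds the client.zip produced on the
# development side, "./keys_client" is where the client keeps its keys.
client = FHEModelClient(path_dir="./deployment", key_dir="./keys_client")

# The private key is generated and never leaves the client.
client.generate_private_and_evaluation_keys()

# Only the evaluation keys are serialized and sent to the server.
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# Encryption happens here, on the client side, with the private key.
x_new = numpy.array([[0.1, 0.2, 0.3]])  # made-up clear input
encrypted_input = client.quantize_encrypt_serialize(x_new)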

Can we get a little more transparency about parameter generation from the client specs?

Not exactly sure what you are referring to here, wasn’t this solution enough? If not, you could also try setting verbose=True when compiling the model (like model_dev.compile(X_model_owner, verbose=True))!
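In context, that could look like the following minimal sketch (one of the built-in Concrete ML models, with placeholder data):

import numpy

from concrete.ml.sklearn import LogisticRegression

# Placeholder data and model, just to show where verbose=True goes.
X_model_owner = numpy.random.rand(100, 3)
y_model_owner = numpy.random.randint(0, 2, size=100)

model_dev = LogisticRegression()
model_dev.fit(X_model_owner, y_model_owner)

# verbose=True prints information about what happens during compilation.
model_dev.compile(X_model_owner, verbose=True)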

It would be interesting to see the underlying process from the Python front-end code.

Could you detail a bit more what you are looking for? Both Concrete ML and Concrete (Concrete Python / Concrete Compiler) are open source, so feel free to have a look at their source code!

Hope this helps :slightly_smiling_face:


Thanks @RomanBredehoft. What I meant was: once we have the client parameters, will it also be possible to see (by executing each step) the underlying algorithm through which the keys are generated from those parameters?

It would help to have a more granular view of the process. For instance, could we visualize the LWE tensor (a, b), where a and b are tensors of dimension n and 1, rather than just looking at it as an opaque object…? It is around these themes that I am a little curious.

Hi @Laser_beam,

When you are using Concrete ML, you are using Python bindings (through pybind11) to the underlying C++/Rust libraries that implement the crypto primitives. This bindings layer prevents you from (easily) accessing the underlying structures (although it is possible, for example by inspecting memory with a debugger). So in Python, most of what you can do from the interpreter is hold objects that reference the low-level structures. For an LWE ciphertext, we could have methods to return the body and mask, for example, but we have never seen a use case that would benefit from that.

If what you are trying to do is to manipulate ciphertexts and keys while running the algorithms yourself, I would suggest using a lower-level library such as TFHE-rs.


Thanks @ayoub for elaborating. You said we can access the body and the mask, but how?
I see that you are using bindings to bridge between the low-level structures and the Python front end, but I want to study the variability of model performance if I instantiate parameters manually (can I really do that?), and maybe how that impacts the number of leveled operations for deep learning models.
But how do I manually instantiate TFHE parameters, quantization parameters, sample noise, manually choose moduli, CRT parameters… and do other things within the pipeline, and finally observe the effects on the models?

All in all, it is an attempt to investigate, inspect, and enhance Concrete ML in any way I can.

What I meant by “we could have methods to return the body and mask” is that someone could implement those methods, but they aren’t available right now; in other words, it isn’t technically impossible.
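To make the idea concrete, here is a purely hypothetical sketch of what such a view could expose; no such accessors exist in Concrete today:

import numpy

# Purely hypothetical: this only illustrates what the "body" and "mask" of an
# LWE ciphertext (a, b) refer to, nothing like it is exposed by Concrete.
class LweCiphertextView:
    def __init__(self, mask: numpy.ndarray, body: int):
        self.mask = mask  # the mask "a": a vector of dimension n
        self.body = body  # the body "b": a single scalar

# Made-up values, only to show the shapes involved (n = 630 is arbitrary).
view = LweCiphertextView(mask=numpy.zeros(630, dtype=numpy.uint64), body=0)
print(view.mask.shape, view.body)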

What you are describing is a much more experimental setup than what Concrete provides. The things you are listing (e.g. TFHE parameters, CRT, quantization…) are all related, and changing one requires looking at the others again. Concrete is more of a framework that aims to guarantee correctness and maximum performance by choosing all the parameters you mentioned for you, so the ability to set them manually is out of the project’s scope.

If I’m understanding you correctly, and what you are trying to do is more experimentation than what Concrete provides via its different compilation/execution options, then you should be looking at the lower-level libraries (either the Concrete Compiler or TFHE-rs).

A little sad to hear that the degree of control we can exercise is so limited. I wanted to play with the parameter sets like in TenSEAL, but I see that it is difficult even to visualize an LWE body and mask. Probably I am a novice regarding the usage of Concrete, and I hope for some more assistance in this regard, because I believe the more a library is put to the test, the better its opportunities to improve.

I also need to confirm one more thing about the compilation flow, if I understand it correctly…
We start with the Python front end → it generates a DAG → we get MLIR out of this DAG (how?) → it is passed to the Concrete optimizer for the choice of parameters → the optimized MLIR is then passed to the Concrete compiler, which uses the FHE dialects (defined under concretelang) to operate on the MLIR → we generate the results (I don’t understand how the MLIR is converted back into a result in the Python front end). Please fill in the missing links or any incorrectness in my understanding.

I also have a question: what is the server lambda function?
Suppose I have a model defined… at some point I believe the server executes it as a lambda function… Can I get an explanation of how we generate a univariate function from a model (since it has to be converted to a circuit)?

We have made our Concrete tools for developers (e.g. data scientists, Python developers), not for cryptographers. Our tools are not intended to be complicated, so we hide all the complexity and the dangerous things (e.g. if one sets parameters badly, it can be very insecure).

If you want to play with the internals, as said in this thread, it’s better to have a look at the low-level functions of TFHE-rs, where you can play with the cryptographic operators like KS, PBS, etc.

If you want to know a bit about how the compiler works, we’ve written a blog post about it, and further posts are coming. Also, some of the information is available in the documentation, e.g. in its Explanation chapter.
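To connect this with the flow you described: the public Concrete Python API already lets you look at the MLIR that the frontend produces (the function and inputset below are made up for illustration):

from concrete import fhe

# Made-up function and inputset, only to illustrate the frontend -> MLIR step.
@fhe.compiler({"x": "encrypted"})
def f(x):
    return x + 42

# Tracing the inputset builds the computation graph; compiling lowers it to an
# MLIR program, runs the optimizer's parameter choice, and produces a circuit.
circuit = f.compile(range(10))

# The textual MLIR generated from the traced graph.
print(circuit.mlir)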


server_lambda is just a name we use for the loaded function to be executed. It’s an object that holds a reference to the entrypoint in the compiled library (a .so file on Linux), and it enables you to make calls to that function.
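For illustration, loading and calling it from Python could look roughly like this (the path and the encrypted arguments / evaluation keys are placeholders that would come from the compilation and from the client, respectively):

from concrete import fhe

# Hypothetical path: "server.zip" is the artifact saved after compilation,
# e.g. with circuit.server.save("server.zip") on the development side.
server = fhe.Server.load("server.zip")

# "server" plays the role described above: it wraps the entrypoint of the
# compiled library so it can be called on encrypted inputs.
# encrypted_args and evaluation_keys are placeholders received from the client.
encrypted_result = server.run(encrypted_args, evaluation_keys=evaluation_keys)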


How does the client verify that they are indeed getting 128 bits of security, or whatever level of security is associated with their compiled circuit? Also, one point of confusion: does the client specs file also contain the exported artifacts file?

I was executing the lines in this blog:

result = compiler.ClientSupport.decrypt_result(key_set, public_result)


TypeError: decrypt_result() missing 1 required positional argument: 'public_result'

I was just verifying the steps

It’s a question for @ayoub, but it’s not Concrete ML, it’s Concrete.

First, the compiler API is an internal API and isn’t really meant for usual developers to use (only for the curious ones :slight_smile: ), and it’s not as stable as the public API. This function has since been updated to take a client_parameters argument for decryption, as you can see here.


Could you please try it once on your system and check whether it works properly?

The current one doesn’t work because of the API change I pointed out in my last message. You can make it work by updating it to result = compiler.ClientSupport.decrypt_result(client_parameters, key_set, public_result). We will also update our blog post.


It’s working fine, thanks @ayoub.

Once we generate the .so files (can we execute them through some terminal commands?), is this shared library object obtained by compiling some C++ file? Please provide some clarity on this.

And I’m looking forward to this video.

If anyone would like to answer this:
1) Generally, in the client-server setting (Concrete ML), we send the evaluation keys as a single binary octet file. Do we concatenate all the evaluation keys (KSK, PBS key, packing KSK)?
2) Why do we generate two secret keys (one LWE and the other GLWE; I think they are called the small and the big key)? Shouldn’t there be only one secret key?

The .so file comes from the compilation of the MLIR program (which is usually generated by the frontend from the Python function you want to compile). The file contains executable code, so it can be loaded in memory and executed through the entrypoint function. The execution can be done from Python using helper functions, but it can also be done from C++ (this isn’t how we tell users to do things, but it’s doable).


@ayoub, can you tell me how to load the binary files (type: Binary, application/octet-stream) in read mode, as well as the .ekl files (these are serialized byte files)?

I tried this, but the kernel dies:

from concrete.fhe.compilation import Keys
from pathlib import Path

file_path = "./key"
location = Path(file_path)
keys = Keys.deserialize(bytes(location.read_bytes()))