How do I load the key and print the key tensors (public, private, evaluation keys)?
In this setting I believe we are encrypting with the public key. Can we get a little more transparency about parameter generation from the client specs?
It would be interesting to see the underlying process inside, using the Python front-end code.
Hello @Laser_beam ,
It seems that you have several questions here, which is great! Let me try to answer them all:
How do I load the key and print the key tensors (public, private, evaluation keys)?
It’s not possible to print the keys! Out of curiosity, why would you want to print them? If you really want to take a look at the keys, you could still replace the temporary directory paths from the OnDiskNetwork class and then open the files.
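For instance, here is a minimal sketch of what that could look like (the path below is a placeholder, to be replaced with the temporary directory your OnDiskNetwork instance actually uses):

```python
from pathlib import Path

# Placeholder: point this at the directory used by your OnDiskNetwork setup.
key_dir = Path("/tmp/concrete_ml_keys")

# The keys are serialized binary blobs, so "printing" them only shows raw
# bytes; listing the files and their sizes is usually more informative.
for path in sorted(key_dir.rglob("*")):
    if path.is_file():
        print(path, path.stat().st_size, "bytes")
```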
In this setting I believe we are encrypting with the public key.
No! Encryption is done with the private key only, on the client side!
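For context, here is a minimal sketch of the client-side flow with the deployment API (FHEModelClient); the paths and input shape below are placeholders, and the client directory is assumed to have been produced beforehand on the dev side:

```python
import numpy as np
from concrete.ml.deployment import FHEModelClient

# Placeholder paths: "client_files" should contain the client artifacts
# produced on the dev side, and "client_keys" is where keys will be stored.
client = FHEModelClient("client_files", key_dir="client_keys")

# Private and evaluation keys are generated on the client side only.
client.generate_private_and_evaluation_keys()

# Placeholder input: the shape must match the model's expected features.
X_new = np.random.rand(1, 4)

# The input is quantized and encrypted with the private key before being
# sent to the server; only the evaluation keys are shared with the server.
encrypted_input = client.quantize_encrypt_serialize(X_new)
```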
Can we get a little more transparency about parameter generation from the client specs?
Not exactly sure what you are referring to here; wasn’t this solution enough? If not, you could also try setting verbose=True when compiling the model (like model_dev.compile(X_model_owner, verbose=True))!
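For example, a minimal sketch of compiling with verbose output (the toy data and built-in model below are assumptions, standing in for your own X_model_owner and model):

```python
import numpy as np
from concrete.ml.sklearn import LogisticRegression

# Toy data standing in for X_model_owner.
X = np.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)

model_dev = LogisticRegression()
model_dev.fit(X, y)

# verbose=True prints details of the compilation process to stdout.
model_dev.compile(X, verbose=True)
```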
It would be interesting to see the underlying process inside, using the Python front-end code.
Could you detail a bit more what you are looking for? Both Concrete ML and Concrete (Concrete Python / Concrete Compiler) are open source, so feel free to have a look at their source code!
Hope this helps
Thanks @RomanBredehoft, but what I meant was: will it also be possible to see (by executing each step), once we have the client parameters, what the underlying algorithm is that generates the keys from those parameters?
It would also help to have a more granular view of the process. For instance, could we visualize the LWE tensor (a, b), where a and b are tensors of dimension n and 1, rather than just looking at it as an object? It is around these themes that I am a little curious.
Hi @Laser_beam ,
When you are using Concrete ML, you are using Python bindings (through pybind11) to the underlying C++/Rust libraries that implement the crypto primitives. This bindings layer prevents you from (easily) accessing the underlying structures (although it is possible, for example by inspecting memory with a debugger). So in Python, most of what you can do from the interpreter is hold objects that reference the low-level structures. For an LWE ciphertext, we could have methods to return the body and mask, for example, but we never saw a use case that would benefit from that.
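To make the (a, b) structure mentioned above more tangible, here is a toy, textbook-style LWE encryption in plain numpy; this only illustrates the standard definition and is not Concrete's actual parameters or internal layout:

```python
import numpy as np

# Toy LWE ciphertext ct = (a, b) with b = <a, s> + delta * m + e (mod q).
# The parameters below are illustrative only and NOT secure.
rng = np.random.default_rng(0)
n, q, delta = 8, 2**32, 2**24

s = rng.integers(0, 2, size=n)                   # binary secret key
m = 3                                            # small plaintext message
a = rng.integers(0, q, size=n, dtype=np.int64)   # mask: uniform in Z_q
e = int(rng.normal(0, 2**10))                    # small Gaussian noise
b = (int(a @ s) + delta * m + e) % q             # body

print("mask a:", a)
print("body b:", b)
```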
If what you are trying to do is manipulate ciphertexts and keys while running the algorithms yourself, I would suggest using a lower-level library such as tfhe-rs.
Thanks @ayoub for elaborating. You said we can access the body and the mask, but how?
Great, I see that you are using bindings to bridge between the low-level structures and the Python front end. But I want to study the variability of model performance if I do manual instantiations of parameters (can I really do that?), and maybe how it impacts the number of leveled operations for deep learning models.
But how do I manually instantiate TFHE parameters, quantization parameters, sample noise, manually choose moduli, CRT parameters… and do other things within the pipeline, and finally observe the effects on the models?
All in all, it is an attempt to investigate, inspect, and enhance Concrete ML in any possible way I can.
What I meant by “we could have methods to return the body and mask” is that someone could implement those methods, but they aren’t available right now: it isn’t technically impossible, it just isn’t exposed.
What you are describing is a much more experimental setup than what Concrete provides. What you are listing (e.g. TFHE parameters, CRT, quantization…) are all related, and changing one requires looking at the others again. Concrete is more of a framework that aims to guarantee correctness and maximum performance by choosing all the parameters you mentioned, so the ability to set those manually is out of the project's scope.
If I’m understanding you correctly and what you are trying to do is more experimentation than what Concrete provides via its different compilation/execution options, then you should be looking at the lower-level libraries (either the Concrete Compiler or TFHE-rs).
A little sad to hear that the degree of control we can exercise is so limited. I wanted to play with the parameter sets like in TenSEAL, but I see that it is difficult to even visualize an LWE body and mask. I am probably a novice regarding the usage of Concrete, and I hope for some more assistance in this regard, because I believe the more a library is put to the test, the better its options to improve.
I also need to confirm one more thing about the compilation flow, if I understand it correctly…
We start with the Python front end -> it generates a DAG -> we get MLIR out of this DAG (how?) -> it is passed to the Concrete Optimizer for the choice of parameters -> the optimized MLIR is next passed to the Concrete Compiler, which uses the FHE dialects (defined under concretelang) to operate on the MLIR -> we generate the results (I don't understand how the MLIR is converted back to a result in the Python front end). Please fill in the missing links or correct my understanding.
I also have a question about what the server lambda function is.
Suppose I have a model defined… At some point I believe the server executes it as a lambda function… Can I have an explanation of how we generate a univariate function from a model? (Because it has to be converted to a circuit.)
We have made our Concrete tools for developers (e.g. data scientists, Python developers), not for cryptographers. Our tools are not intended to be complicated, so we hide all the complexity and the dangerous things (e.g., if one sets parameters badly, it can be very insecure).
If you want to play with the internals, as said in this thread, it’s better to have a look at the low-level functions of TFHE-rs, where you can play with the cryptographic operators like KS, PBS, etc.
If you want to know a bit about how the compiler works, we’ve written a blog post about it, and further posts are coming. Also, I guess some of the information is available in the documentation, e.g. its Explanation chapter.
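If it helps connect the steps you listed, Concrete Python also lets you look at the MLIR that the front end generates from the traced graph. A minimal sketch (I believe the compiled circuit exposes the generated MLIR as its mlir attribute):

```python
from concrete import fhe

# Trace a tiny function into a computation graph (the "DAG" step).
@fhe.compiler({"x": "encrypted"})
def f(x):
    return x + 42

# The inputset is used for tracing and for the optimizer's parameter choice.
inputset = range(10)
circuit = f.compile(inputset)

# Inspect the MLIR produced from the traced graph before it is lowered
# further by the Concrete Compiler.
print(circuit.mlir)
```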
server_lambda is just a name we have used to represent the loaded function to be executed. It’s an object that holds a reference to the entrypoint in the compiled library (an .so file on Linux), and it enables you to make calls to that function.
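As a rough analogy only (this is not the Concrete API, just generic Python showing the idea of holding an entrypoint into a compiled .so and calling it; the library and symbol names are hypothetical):

```python
import ctypes

# Hypothetical shared library and symbol, purely for illustration; Concrete
# builds and loads its own compiled artifact internally.
lib = ctypes.CDLL("./compiled_circuit.so")

# server_lambda plays a role similar to this handle: it references the
# compiled entrypoint so that calls can be made on it.
entrypoint = lib.run_circuit
entrypoint.argtypes = [ctypes.c_int]
entrypoint.restype = ctypes.c_int

result = entrypoint(42)
print(result)
```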
How do I get the location of key_dir?