Zama FHE future research direction

Hi guys, I have some confusion after a month of working with Zama Concrete as well as Concrete ML. Why do Zama’s current open-source projects only focus on the TFHE technique? Will the team consider other FHE methods like CKKS in the future, since TFHE only supports integer operations?

Hello.

One month? Awesome, what do you think about the product? What have you been able to build here? What’s missing for your use case?

For your question: for now, Zama is only about TFHE, indeed. TFHE is a scheme that can handle all operations, thanks to its unique programmable bootstrapping (PBS), which is the FHE equivalent of a table lookup. Also, TFHE is exact: the results you get in FHE are exactly the results you get in the clear, which is very convenient for developers. With CKKS, computations are approximate, which is less convenient and can cause problems in some applications.
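To make the table-lookup point concrete, here is a minimal sketch using the Concrete Python frontend (names such as `fhe.LookupTable`, `fhe.compiler` and `encrypt_run_decrypt` are written from memory, so treat them as indicative and check the documentation for the exact API):

```python
# Minimal sketch with the Concrete Python frontend (API names from memory, not authoritative).
from concrete import fhe

# An explicit table lookup: in TFHE, this is evaluated with a programmable bootstrapping (PBS).
table = fhe.LookupTable([(x ** 2) % 16 for x in range(16)])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

inputset = range(16)          # representative inputs, used to size the circuit
circuit = f.compile(inputset)

# Exactness: the decrypted FHE result matches the clear computation exactly.
assert circuit.encrypt_run_decrypt(5) == (5 ** 2) % 16
```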

What’s the problem you have with TFHE? If it’s related to the lack of native support for floats, there are a lot of solutions for that. E.g., in Concrete ML, we handle the quantization of models, weights, inputs, etc. directly under the hood, so things should be pretty straightforward for the user. Basically, a user of Concrete ML will most of the time not even have to know that quantization happens.
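For instance, with a built-in model, quantization is entirely handled by the library. A rough sketch assuming Concrete ML’s scikit-learn-style API (parameter names like `n_bits` and the `fhe="execute"` flag are from memory, so they are indicative only):

```python
# Rough sketch of a built-in Concrete ML model (API names are indicative, not authoritative).
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(n_bits=8)   # quantization bit-width, handled by the library
model.fit(X_train, y_train)            # trains on floats, quantizes under the hood
model.compile(X_train)                 # compiles the quantized model to an FHE circuit

y_pred_fhe = model.predict(X_test, fhe="execute")   # inference on encrypted data
```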

Does it answer your question?


Hi benoit,

Thank you for your answer.

It just came to my mind that if we work with a complex dataset or an advanced ML model, accuracy and precision may be badly affected by quantization or pruning. So for me, quantization is a workaround rather than a perfect way to work with integers in FHE (especially TFHE). That’s why I am thinking of CKKS, which can process floats, even though it is only an approximate approach :sweat_smile:.

Anyway, thank you for your confirmation, it helps me a lot.

Even with floats, precision is limited! And as you said, approximate computations also reduce precision.

I wouldn’t say quantization is bad per se: it has been used in the ML field, independently of FHE, for a very long time, e.g. to compress models or to make them faster. And it’s a fruitful area of research: researchers achieve excellent results, even with aggressive quantization!
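Just to illustrate what plain uniform quantization does, independently of FHE and of any Zama library, here is a small self-contained example:

```python
import numpy as np

# Uniform quantization of floats to n-bit unsigned integers, and back.
def quantize(values, n_bits=8):
    scale = (values.max() - values.min()) / (2 ** n_bits - 1)
    zero_point = values.min()
    q = np.round((values - zero_point) / scale).astype(np.int64)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return q * scale + zero_point

weights = np.random.randn(5).astype(np.float32)
q, scale, zp = quantize(weights)
print(weights)                      # original floats
print(dequantize(q, scale, zp))     # recovered floats, up to quantization error
```

The integers `q` are what a scheme like TFHE actually computes on, and the quantization error is controlled by the bit-width.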

What we have done in Concrete ML is to split the work and responsibilities:

  • everything related to cryptography is handled by the product, so the user doesn’t have to take care of it;
  • on the other side, what is related to ML, and in particular quantization for deep learning, is sometimes managed by the user, who can do a better job than the compiler could. Indeed, the user can make modifications the compiler would not be allowed or able to do (like changing layers or doing quantization-aware training).

We kept things simple for classical ML models (linear models, tree-based models) by making the best choices for the users. For DL models (torch-like models), we leave that flexibility to the users, so they can make those choices themselves.
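For the torch path, where the user keeps control over quantization (including quantization-aware training), the flow looks roughly like this; `compile_torch_model` and its parameters are written from memory, so please check the Concrete ML documentation for the exact signature:

```python
# Sketch of compiling a torch model with Concrete ML (names from memory, indicative only).
import torch
from concrete.ml.torch.compile import compile_torch_model

class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

torch_inputset = torch.randn(100, 10)

# Post-training quantization: the user picks the bit-width and can adapt the architecture.
quantized_module = compile_torch_model(TinyMLP(), torch_inputset, n_bits=6)

# The returned quantized module can then run encrypted inference,
# e.g. quantized_module.forward(x, fhe="execute") in recent versions (name from memory).
```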


Can I read and analyze the encryption part of the source code on GitHub? I’d quite like to take a deeper look at the FHE cryptography :grinning:

  • Of course you can, everything we do is open source: Zama · GitHub
  • Let’s keep one question per thread, for clarity :slight_smile:

So, to continue, I encourage you to ask for more details in another post on this Discourse, or to chat live with developers on discord.fhe.org. See you there!


Sure :slightly_smiling_face: thank you for the support. Your answers help me a lot.
