What is the relationship between the three libraries TFHE-rs, Concrete and Concrete ML?

I really appreciate your work. I am trying to enter this field and have several questions.

  1. When I analyzed evaluate_one_example_fhe.py in Concrete ML, I found that the run method of ServerCircuit(WrapperCpp) took the most time, and I wanted to look at the TFHE scheme itself at a fine-grained level, so I checked the Concrete library and then TFHE-rs. I don’t understand how Concrete ML calls Concrete, or how Concrete calls TFHE-rs. I wish there were a diagram explaining this. Sorry, I’m not familiar with Rust.

  2. What are the underlying operators of the TFHE algorithm, in the same way that the underlying operators of CKKS are NTT/INTT? In other words, what are the performance bottlenecks of TFHE-rs? I would like to do some work on this and test it in ML inference.

  3. If possible, could you point out which function takes the most time in the TFHE-rs library?

I browsed through all your website content and I thought it was really cool.

Hello

Yes, Concrete ML uses Concrete: Concrete ML prepares some Python code, then Concrete compiles it to FHE.
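To give an idea of what that compilation step looks like, here is a minimal sketch using concrete-python’s public API (the function `double_and_add` is just an illustrative example, not something Concrete ML actually generates):

```python
# Minimal sketch: how a plain Python function is compiled to an FHE circuit
# with Concrete (concrete-python). Concrete ML does something similar,
# but for whole ML models.
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def double_and_add(x):
    return 2 * x + 1

# The inputset tells the compiler the ranges of values, so it can pick bit-widths.
inputset = range(0, 16)
circuit = double_and_add.compile(inputset)

# Encrypt, run homomorphically (this is the "server" part), decrypt.
assert circuit.encrypt_run_decrypt(5) == 11
```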

And yes, Concrete uses TFHE-rs: TFHE-rs is our library for cryptographic operations, so Concrete calls into it to execute FHE programs. This happens in the Concrete-CPU backend (for everything CPU-related).

In TFHE, you mainly have:

  • linear operations (additions, subtractions, multiplications by integers): these are called leveled operations and are very fast
  • table-lookup-based operations (TLU): basically, anything non-linear is replaced by a TLU. This covers min/max, activations, etc. The corresponding operation in FHE is called “programmable bootstrapping” (PBS). A small example is sketched just after this list.
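To make the two categories concrete, here is a toy concrete-python circuit (purely illustrative, not taken from Concrete ML): the affine part stays leveled, while the table lookup is compiled to one PBS.

```python
# Toy example: leveled operations vs. a table lookup (PBS) in the same circuit.
from concrete import fhe

# A 16-entry lookup table; in FHE it is evaluated by one PBS.
square_table = fhe.LookupTable([i ** 2 for i in range(16)])

@fhe.compiler({"x": "encrypted"})
def f(x):
    y = 2 * x + 1            # leveled operations: very fast
    return square_table[y]   # non-linear part: one table lookup, i.e. one PBS

circuit = f.compile(range(0, 8))  # keeps y within the 16-entry table
assert circuit.encrypt_run_decrypt(3) == 49
```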

Under the hood, yes: as in CKKS, there are polynomial multiplications in TFHE as well, so NTT/FFT is used in both schemes and is critical for speed.
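For intuition, here is a toy numpy sketch (plain Python, not TFHE-rs code) of that core primitive: polynomial multiplication modulo X^N + 1 (negacyclic convolution), computed once naively and once with an FFT plus a “twist” by 2N-th roots of unity.

```python
# Toy sketch (not TFHE-rs code): polynomial multiplication modulo X^N + 1,
# the kind of operation that FFT/NTT accelerates inside TFHE.
import numpy as np

def negacyclic_mul_naive(a, b):
    """Schoolbook product of a and b modulo X^N + 1, O(N^2)."""
    n = len(a)
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] += a[i] * b[j]
            else:
                res[k - n] -= a[i] * b[j]  # X^N = -1
    return res

def negacyclic_mul_fft(a, b):
    """Same product via FFT with a twist by 2N-th roots of unity, O(N log N)."""
    n = len(a)
    twist = np.exp(1j * np.pi * np.arange(n) / n)
    fa = np.fft.fft(a * twist)
    fb = np.fft.fft(b * twist)
    res = np.fft.ifft(fa * fb) / twist
    return np.rint(res.real).astype(np.int64)

rng = np.random.default_rng(0)
a = rng.integers(-10, 10, size=64)
b = rng.integers(-10, 10, size=64)
assert np.array_equal(negacyclic_mul_naive(a, b), negacyclic_mul_fft(a, b))
```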

PBSs are:

  • unique to TFHE: in CKKS, non-linear functions are instead replaced by polynomial approximations
  • a way to reduce noise, which allows us to go as deep as we want in TFHE, e.g. for deep learning
  • costly, so they are the bottleneck: we try to reduce their number and their bit-width (a rough way to observe this is sketched below). See Table lookups (basics) | Concrete
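If you want to see that cost grow with the bit-width yourself, here is a rough sketch with concrete-python (timings will depend heavily on your machine and on the parameters the compiler picks):

```python
# Rough sketch: time a circuit that is essentially one table lookup (one PBS)
# at two different bit-widths.
import time
from concrete import fhe

def tlu_circuit(bits):
    table = fhe.LookupTable([(i * i) % (2 ** bits) for i in range(2 ** bits)])

    @fhe.compiler({"x": "encrypted"})
    def f(x):
        return table[x]

    return f.compile(range(2 ** bits))

for bits in (4, 8):
    circuit = tlu_circuit(bits)
    circuit.keygen()  # generate keys first, so the timing below is mostly the PBS
    start = time.perf_counter()
    circuit.encrypt_run_decrypt(1)
    print(f"{bits}-bit TLU (one PBS): {time.perf_counter() - start:.3f} s")
```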

The documentation for Zama products is easy to find: doc. There you’ll find Concrete, Concrete ML and TFHE-rs.

Cheers


Thank you very much :grinning: :grinning: :grinning:
