I’m trying to find parameters for which I can do exact computations. For the lookup table to be one-to-one (so that all possible values are encoded), I must have 2N ≈ q, so that q/(2N) rounds to 1. But I must also be able to do N cmuxes so that I can bootstrap one LWE slot of the RLWE.

The problem is that, for a 3-bit message and a worst-case error e, according to my estimates, N cmuxes make the error grow from e to N((k+1)le + e), so we need log_2(N((k+1)le + e)) bits for the error. Since we also need 3 bits for the message and 1 for padding, I set log_2(q) = log_2(N(2le + e)) + 3 + 1.
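The arithmetic above can be sketched quickly. The values of N, l and e below are placeholders I picked for illustration, not real parameters; the point is only to show that the required log_2(q) lands far above log_2(2N):

```python
import math

# Illustrative sketch of the constraint above (N, l, e are assumptions, not
# real parameters). Worst-case error after N cmuxes: N * ((k + 1) * l * e + e).
N = 1024     # ring dimension (assumed)
k = 1        # GLWE dimension, as stated below
l = 3        # decomposition level count (assumed)
e = 2**4     # worst-case input error (assumed)

err_after = N * ((k + 1) * l * e + e)
bits_for_error = math.ceil(math.log2(err_after))
log2_q = bits_for_error + 3 + 1  # 3 message bits + 1 padding bit

# For a one-to-one lookup table we would need q ≈ 2N, i.e. log2(q) ≈ log2(N) + 1.
print(log2_q)              # 21 with these toy numbers
print(math.log2(2 * N))    # 11.0 — nowhere near log2(q)
```

With any plausible numbers, the error term alone needs more bits than log_2(2N) provides, which is exactly the contradiction described next.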

It seems that I can never make q/(2N) close to 1, so I can never build a one-to-one lookup table for exact computations. But it looks like you guys did it. I was wondering how.

By the way, k = 1 in this case; it’s the GLWE k parameter.

If you want to do exact computations over 3-bit messages, we suggest that you take a look at the new Concrete version. It offers an easy-to-use Rust API with a large list of functions available for your homomorphic data types.

If you want to know the parameters that were used, you can find them in the sources, in concrete-shortint/src/parameters/mod.rs. Each parameter set corresponds to an exact precision, with a failure probability of 2^(-13.9) for the bootstrap.
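For intuition, a quick conversion of that failure probability into more familiar terms:

```python
# What does a per-bootstrap failure probability of 2^(-13.9) mean in practice?
p_fail = 2 ** -13.9

print(p_fail)              # ≈ 6.5e-05
print(round(1 / p_fail))   # ≈ 15287, i.e. roughly one failure per 15,000 bootstraps
print((1 - p_fail) * 100)  # success rate ≈ 99.9935 %
```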

If you are interested in the theoretical noise analysis, you can find it in the paper, Appendix B.

The Learning With Errors assumption relies on a randomly chosen error (drawn from a Gaussian distribution) being added at encryption. If this error happens to be too large, correctness can no longer be guaranteed, and this has to be taken into account when choosing the parameters. It is described on page 39 of the paper above, in the noise analysis of the modulus switching (the first step of a programmable bootstrapping).
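A hedged sketch of where such a probability comes from, under a deliberately simplified one-dimensional model (my assumption for illustration, not the paper's exact analysis): if the accumulated noise is modelled as Gaussian with standard deviation sigma, and decryption is correct whenever the noise stays below some bound, then the failure probability is the two-sided Gaussian tail:

```python
import math

# Simplified model: noise ~ N(0, sigma^2); decryption is correct when
# |noise| < bound. Then P(failure) = P(|X| > bound) = erfc(bound / (sigma * sqrt(2))).
def failure_probability(sigma: float, bound: float) -> float:
    return math.erfc(bound / (sigma * math.sqrt(2)))

# Example: a correctness bound of about 4 standard deviations already
# gives a tail probability in the same ballpark as 2^(-13.9).
print(failure_probability(sigma=1.0, bound=4.0))  # ≈ 6.3e-05
```

Larger parameters shrink sigma relative to the bound, pushing this tail probability down, which is the tradeoff mentioned below.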

When you say chance of failing, do you mean I get a wrong result instead of the exact operation? Do I at least know that it failed, or does it fail silently?

It fails silently with the given error probability; the effect is that the result is no longer exact, as it would have been in the clear.
To get a lower probability of error you need larger parameters, which leads to longer computation times (in some cases it is even impossible to find parameters).
It is a tradeoff, and by the way, it succeeds in more than 99.99% of cases.
Hope it helps.

OK, I’ve read that paper, and it does not explain why the bootstrap fails with some probability; it only states this fact. Is there a place where the exact reason is explained?