P_error in simulation mode

I’m trying to use Concrete ML 1.4 to compile a 2-layer MLP model (Linear + Activation + Linear). But I found that, for the same p_error, the accuracy differs between “execute” and “simulate” modes.

For fun, I set p_error=0.95 :joy: , n_bits=6, rounding_threshold_bits=8. The activation function is ReLU.
The accuracy is 85% in simulate mode but 75% in execute mode.
I also tried p_error=0.99, n_bits=6, rounding_threshold_bits=8, with Square (i.e. x*x) as the activation.
The accuracy is 85% in simulate mode but 54% in execute mode.
Then p_error=0.95, n_bits=6, rounding_threshold_bits=8, again with Square (i.e. x*x).
The accuracy is 85% in simulate mode and 84% in execute mode. (This result seems good :thinking:)

That’s strange, because the documentation says:

If the p_error value is specified and simulation is enabled, the run will take into account the randomness induced by the choice of p_error. This results in statistical similarity to the FHE evaluation.

Hi! You bring up a very good point about simulation: while it’s very flexible in allowing tests with very high p_error, such values are useless in practice, as they result in bad accuracy in FHE.

The maximum recommended p_error values are around 0.1-0.2. There is still a risk of accuracy loss at these settings, which can be checked with simulation; in this range, simulation should give a good picture of the accuracy one would obtain in FHE.
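To see why very high p_error values are hopeless in real FHE execution, a back-of-the-envelope calculation helps (plain Python, not the Concrete ML API): p_error applies per programmable bootstrap (PBS), so errors compound across every table lookup in the circuit.

```python
def any_pbs_wrong(p_error: float, k: int) -> float:
    """Probability that at least one of k independent table lookups
    (PBS operations) returns a wrong value, given a per-PBS error
    probability p_error: 1 - (1 - p_error)^k."""
    return 1.0 - (1.0 - p_error) ** k

# Even a small circuit with 10 lookups is almost certainly corrupted
# somewhere when p_error=0.95, and often corrupted at p_error=0.1:
print(f"p=0.95,  k=10: {any_pbs_wrong(0.95, 10):.6f}")   # ~1.0
print(f"p=0.10,  k=10: {any_pbs_wrong(0.10, 10):.6f}")   # ~0.65
print(f"p=2e-40, k=10: {any_pbs_wrong(2e-40, 10):.2e}")  # effectively 0
```

Whether a corrupted lookup actually flips the final prediction depends on the model, which is why moderate p_error values can keep accuracy intact while extreme ones cannot.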

With the default, which is 2E-40, you essentially always get exact computation.

With p_error values in between, computation is not always exact, but you maintain accuracy while getting a speed-up in FHE.

There is usually a 2x-3x speedup between “always correct” (2E-40) and a p_error of 0.1: see Advanced Features - Concrete ML.
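The kind of check simulation enables can be sketched with a toy Monte Carlo (NumPy only, not the Concrete ML runtime; the model, table size, and threshold here are all illustrative): inject per-lookup failures at a given p_error and watch accuracy degrade as p_error grows.

```python
import numpy as np

def noisy_lookup_accuracy(p_error: float, n: int = 5000, seed: int = 0) -> float:
    """Toy classifier: label 6-bit signed inputs by whether ReLU(x) > 15,
    evaluating ReLU through a table lookup that fails (returns a
    uniformly random table entry) with probability p_error per lookup.
    This very loosely mimics a PBS error corrupting one activation."""
    rng = np.random.default_rng(seed)
    x = rng.integers(-32, 32, n)                  # 6-bit signed inputs
    table = np.maximum(np.arange(-32, 32), 0)     # ReLU as a lookup table
    out = table[x + 32]                           # exact lookups
    wrong = rng.random(n) < p_error               # which lookups fail
    out[wrong] = rng.choice(table, wrong.sum())   # failed ones go random
    truth = np.maximum(x, 0) > 15                 # exact labels
    return float(np.mean((out > 15) == truth))

for p in (2e-40, 0.1, 0.95):
    print(f"p_error={p}: accuracy={noisy_lookup_accuracy(p):.3f}")
```

At 2E-40 the toy model is exact; at 0.1 it loses a few points; at 0.95 it collapses, which matches the overall pattern described above, even though the real error model in FHE is more subtle than this sketch.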