I'm using Concrete ML 1.4 to compile a 2-layer MLP (Linear + Activation + Linear), and I found that, for the same `p_error`, the accuracy differs between "execute" and "simulate" modes.

As an experiment, I set `p_error=0.95`, `n_bits=6`, `rounding_threshold_bits=8`, with ReLU as the activation.

The accuracy is 85% in simulate mode but 75% in execute mode.

I also tried `p_error=0.99`, `n_bits=6`, `rounding_threshold_bits=8`, with Square (i.e. x*x) as the activation.

The accuracy is 85% in simulate mode but only 54% in execute mode.

Finally, `p_error=0.95`, `n_bits=6`, `rounding_threshold_bits=8`, with Square (i.e. x*x) as the activation.

The accuracy is 85% in simulate mode and 84% in execute mode. (This result looks reasonable.)
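Since simulation is only claimed to be *statistically* similar, I wondered whether the gaps could just be run-to-run noise. A rough binomial confidence-interval check (assuming, purely for illustration, a test set of 1000 samples, which is not my real size) suggests a 10-point gap is too large for that:

```python
import math

def accuracy_ci(acc, n, z=1.96):
    """95% normal-approximation confidence interval for an accuracy
    measured on n samples."""
    se = math.sqrt(acc * (1.0 - acc) / n)
    return acc - z * se, acc + z * se

n = 1000  # hypothetical test-set size
for label, acc in [("simulate", 0.85), ("execute (ReLU, p_error=0.95)", 0.75)]:
    lo, hi = accuracy_ci(acc, n)
    print(f"{label}: {acc:.2f} in [{lo:.3f}, {hi:.3f}]")
# prints:
# simulate: 0.85 in [0.828, 0.872]
# execute (ReLU, p_error=0.95): 0.75 in [0.723, 0.777]
```

The two intervals do not overlap, so (at that hypothetical sample size) the simulate/execute gap would not be explained by sampling noise alone.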

That's strange, because the documentation says:

> If the `p_error` value is specified and simulation is enabled, the run will take into account the randomness induced by the choice of `p_error`. This results in statistical similarity to the FHE evaluation.