De-quantization of a linear model output in v0.6.1

Hi, I wrote code that benchmarks the inference speed of a (trained) logistic regression model, with help from many people on Discord, and I found that it no longer works with v0.6.1 (I previously used v0.5.1). Could someone help me fix it? The goal is to measure the time for quantization, encryption, inference, decryption, and de-quantization separately, not all at once.

model = ConcreteLR(n_bits=n_bits), y_train)

model_circuit = model.compile(X_train)

# Encrypted inference, one-by-one
y_pred_enc = []
for i in range(len(X_test)):
    clear_input = X_test[[i], :]

    quantized_input = model.quantize_input(clear_input)
    encrypted_input = model_circuit.encrypt(quantized_input)
    encrypted_output = model_circuit.run(encrypted_input)
    decrypted_output = model_circuit.decrypt(encrypted_output)
    dequantized_output = model.quantized_module_.dequantize_output(decrypted_output)
    y_pred_enc.append(dequantized_output)


The error message says: AttributeError: 'LogisticRegression' object has no attribute 'quantized_module_', raised at the de-quantization step. So it seems the rest of the code still works fine.
Also, let me know if there's anything I should change for better performance (especially for the very fast linear models). Thanks in advance.
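For the "measure each step separately" part, a minimal, library-agnostic sketch of per-stage timing (using time.perf_counter; the time_stage helper is my own stand-in, and the lambda below just represents any one of the quantize/encrypt/run/decrypt/de-quantize calls):

```python
import time

def time_stage(label, fn, *args):
    """Run fn(*args) and report how long it took; returns (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.6f} s")
    return result, elapsed

# Stand-in stage: in the real loop you would pass e.g. model.quantize_input
# (or model_circuit.encrypt, etc.) together with its input.
result, seconds = time_stage("quantize", lambda v: v * 2, 3)
```

Summing the per-stage elapsed values over the whole test-set loop then gives a total per stage rather than one aggregate number.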

Hello!
You just need to remove the quantized_module_ part and call dequantized_output = model.dequantize_output(decrypted_output) instead!

Besides, you should now be able to directly call dequantize_output(decrypted_output) with all linear and tree-based models. We'll also make sure this API stays consistent across all of our built-in models in our next release :wink:

The rest of your code looks fine! Also, linear models can now handle much larger precision. By default, we use 8 bits, which should be enough to match results from scikit-learn. You can experiment with more, but be aware that you'll hit a limit where the model no longer compiles if this number gets too high :grin:
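To give an intuition for why 8 bits is usually enough, here is a standalone sketch of uniform quantization error at different bit widths (plain Python, not the Concrete ML internals; quantize_dequantize is my own illustrative helper):

```python
import random

def quantize_dequantize(values, n_bits):
    """Uniformly quantize values to n_bits integers, then map back to floats."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** n_bits - 1)  # step size of the integer grid
    return [round((v - lo) / scale) * scale + lo for v in values]

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
for n_bits in (4, 8, 12):
    err = max(abs(a - b) for a, b in zip(quantize_dequantize(xs, n_bits), xs))
    print(f"{n_bits} bits: max de-quantization error = {err:.2e}")
```

Each extra bit roughly halves the worst-case rounding error, which is why the gain from going past 8 bits is small compared to the compilation cost.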

Hope this helps!


This works perfectly, thanks!