Concrete-ML lets the user easily encrypt the inputs, run the inference in FHE and decrypt the outputs. For all available models, these steps are gathered in a single predict method, which follows the same API as scikit-learn. Here's an example using a LogisticRegression model:
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression

# Generate some example training data (any numpy arrays work here)
x, y = make_classification(n_samples=100, n_features=4, random_state=42)

# Instantiate and train
model = LogisticRegression()
model.fit(x, y)
# Compile
model.compile(x)
# Get the model's class predictions with FHE execution
y_pred = model.predict(x, execute_in_fhe=True)
As with scikit-learn, it is possible to predict the logits as well as the class probabilities by using the decision_function or predict_proba methods, respectively. Note that, for now, only the matrix multiplication is executed in FHE; the subsequent post-processing steps (sigmoid, argmax) are done in the clear.
Additionally, it is possible to execute each step (quantization, key generation, encryption, FHE execution, decryption) separately:
from concrete.ml.sklearn import LogisticRegression
import numpy
# Instantiate and train (x, y: the same training data as in the previous example)
model = LogisticRegression()
model.fit(x, y)
# Compile
fhe_circuit = model.compile(x)
# Keygen
fhe_circuit.keygen()
# Quantize an input and encrypt
input = x[:1]
q_input = model.quantize_input(input)
q_input_enc = fhe_circuit.encrypt(q_input)
# Execute the inference in FHE
q_y_enc = fhe_circuit.run(q_input_enc)
# Decrypt and dequantize to get logits
q_y = fhe_circuit.decrypt(q_y_enc)
y_logit = model.dequantize_output(q_y)
# Apply the sigmoid to get probabilities (in the clear)
y_proba = model.post_processing(y_logit, already_dequantized=True)
# Apply the argmax to get class predictions (in the clear)
y_pred = numpy.argmax(y_proba, axis=1)
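As a sanity check, the result of these separate steps can be compared against the single-call API on the same sample, run in the clear (a minimal sketch: since the FHE computation operates exactly on the quantized values, both paths should return matching probabilities):
# Clear (non-FHE) probabilities for the same sample
y_proba_clear = model.predict_proba(input)
# The decrypted FHE result should closely match the clear quantized inference
print(numpy.allclose(y_proba, y_proba_clear))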