Accessing intermediate layers after compilation

Can we access the intermediate layers of our model after compilation (using Brevitas QAT or post-training quantization)? If yes, how?

Can we check the shape of, or do slicing operations on, encrypted values (which are Value objects)?

Hello @Laser_beam ,

There is no way at the moment to get intermediate results of your model after compilation.
What you can do, though, is return the intermediate values you need from the function you are compiling, so you can inspect them alongside the final output.
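To illustrate the workaround, here is a minimal sketch with a hypothetical two-layer forward function (plain numpy standing in for the model; names and weights are made up for illustration): the intermediate activation is simply added to the return value, so the compiled circuit exposes it next to the prediction.

```python
import numpy as np

def forward_with_intermediates(x, w1, w2):
    # intermediate activation we want to inspect after compilation
    hidden = np.maximum(x @ w1, 0.0)  # ReLU
    output = hidden @ w2
    # returning the intermediate as an extra output is the workaround
    return output, hidden

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 2))
output, hidden = forward_with_intermediates(x, w1, w2)
print(output.shape, hidden.shape)  # (1, 2) (1, 8)
```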

You should be able to compile Brevitas models that use shapes and slicing operations, but compilation is static in the sense that a circuit is compiled for a given input data shape.
If you need to handle multiple shapes, you will need to compile your circuit multiple times.
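One common pattern for the multiple-shapes case is to keep one compiled circuit per input shape and dispatch on the input's shape at call time. A sketch, where `compile_model` is a hypothetical stand-in for the real compilation call (e.g. compiling with a calibration set of that shape):

```python
import numpy as np

def compile_model(input_shape):
    # placeholder "circuit": remembers the shape it was compiled for
    def circuit(x):
        assert x.shape == input_shape, "input must match the compiled shape"
        return x.sum(axis=1)
    return circuit

# one circuit per supported input shape
circuits = {shape: compile_model(shape) for shape in [(2, 4), (2, 8)]}

x = np.ones((2, 4))
result = circuits[x.shape](x)  # dispatch on the input's shape
print(result)  # [4. 4.]
```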


Thank you @luis, I had a follow-up question:
after I have trained and compiled a model, does Concrete ML have anything like model.summary() so that I can see the trainable parameters?

something like
model = MyModel()
total_params = sum(p.numel() for p in model.parameters())
print(f"Number of parameters: {total_params}")

Well, actually the model that will run in FHE is the same as your Torch/Keras model, so you can just call the method on your object.

Unless you are talking about one of our built-in neural networks, in which case you can look at the underlying sklearn_model attribute.
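The counting itself is the same in both cases: sum the element counts of the parameter tensors. A minimal sketch, with numpy arrays standing in for the parameter tensors (an assumption for illustration; with a Torch model you would sum `p.numel()` over `model.parameters()` as in the snippet above):

```python
import numpy as np

# stand-in parameter tensors for a tiny 4 -> 8 -> 2 network
params = [
    np.zeros((4, 8)),  # layer 1 weight
    np.zeros(8),       # layer 1 bias
    np.zeros((8, 2)),  # layer 2 weight
    np.zeros(2),       # layer 2 bias
]
total_params = sum(p.size for p in params)
print(f"Number of parameters: {total_params}")  # 58
```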


clear_prediction_classes = model_dev.predict_proba(X_client).argmax(axis=1)
This is a small segment from https://github.com/zama-ai/concrete-ml/blob/release/1.1.x/docs/advanced_examples/ClientServer.ipynb

I had a simple doubt: predict_proba(X_client).argmax(axis=1) does not work for a quantized module. I tried it and it says the quantized module has no attribute predict_proba. What method should I use in this case, where we are not using the built-in models but rather a quantized module?

Hello @Laser_beam,
You are right, QuantizedModule objects don’t have a predict_proba method like built-in models do (such as the Scikit-Learn ones); they follow Torch’s API and thus have a forward method instead to execute the inference. Nonetheless, forward still provides a fhe parameter, which can be set to “disable” (clear execution), “simulate” (FHE simulation) or “execute” (FHE execution).

Hope this helps!
