Quantization only works for input shape of (1, ., .)

Hello,

I ran into something interesting while trying to quantize a custom ONNX model using the compile_onnx_model function.

TL;DR
quantization only works for input shape of (1, ., .)

How did I end up there
I am currently working on a project where I customize the ResNet18 model: I remove everything below the last ReLU and add a new output right after it.

At this point my custom resnet takes as input tensor: float32[N,3,224,224] and outputs tensor: float32[N,512,7,7].

Then, in python, I run the following code:

import torch
from onnx import load_model, version_converter

from concrete.ml.torch.compile import compile_onnx_model

# load onnx & update opset to 14
model = load_model(RESNET18_V1_WOGAP_FILE)
model = version_converter.convert_version(model, 14)

# quantize model
# I tried many possibilities for calibration_input_set (numpy, tensor, different dimensions)
calibration_input_set = torch.FloatTensor(100, 3, 224, 224).uniform_(-100, 100)
qmodel = compile_onnx_model(
    model,
    calibration_input_set,
)

Then when I run the code I get the error:

ValueError: Got 1 inputs, expected 101. Either the quantized module has not been properly initialized or the input data has been changed since its initialization.

After a bit of research, I found that it was due to the call to quantize_module in the build_quantize_module function (cf. source code):

# Build the quantized module
# TODO: mismatch here. We traced with dummy_input_for_tracing which made some operator
# only work over shape of (1, ., .). For example, some reshape have newshape hardcoded based
# on the inputset we sent in the NumpyModule.
quantized_module = post_training_quant.quantize_module(*inputset_as_numpy_tuple)

Questions
I have 2 questions stemming from this:

  1. What are the consequences of this? I couldn’t find any related issue.
  2. Is there a way to bypass this input shape restriction?

Thanks in advance,

You are correct, and indeed this is a limitation of Concrete ML in the way we import ONNX. While it is possible to have dynamic input shapes in ONNX models, they need to be exported in a way that marks some dimensions as dynamic. This is not very common, and in any case it is not supported by Concrete ML.

I’m not sure I can provide a workaround; it would be easier if you could provide your code and ONNX model. As a first idea, could you try supplying your input set as a list of batch-size-1 tensors, like:

calibration_input_set = [torch.FloatTensor(1, 3, 224, 224).uniform_(-100, 100) for _ in range(100)]

Thanks for the quick response!

First, here are the requested resources:

  1. requirements.txt: libraries installed in my python venv
  2. the code
  3. the custom ONNX model (I uploaded a tar as well as the raw .onnx file)

Now, you said in your response that

it is possible to have dynamic shapes of models in ONNX, [but] they need to be exported in a way that marks some dimensions as dynamic.

As I am not very familiar with machine learning, I am not sure I understand what you mean by that. Do I have to modify my model so that the input has shape (1, ., .)? Or should I only modify the calibration_input_set dimensions (for now)?

Finally, I tried with the calibration_input_set you suggested, which also results in an error:

AttributeError: 'list' object has no attribute 'cpu'

which is due to torch_tensor_or_numpy_array_to_numpy_array only accepting NumPy ndarrays or torch tensors as parameters.

You are trying to import a ResNet18 ONNX model using the post-training quantization flow. I think that is not the most straightforward way to get a ResNet to compile to FHE. I suggest you start with the PyTorch Torchvision ResNet18 model. You can then use compile_torch_model, which will not have the shape issue.

Thanks for your response!

I tried importing ResNet18 straight from the torchvision models, but I am facing another issue (which is what led me to customize the model in the first place): the GlobalAveragePool operator is not implemented in Concrete ML. To be exact, the creation of the NumpyModule fails because of the lack of an equivalent operator.

I am considering two workarounds (as I only want some bench results about some parts of ResNet18):

  1. Either replace GlobalAveragePool with AveragePool with the right padding preset
  2. Or recreate in PyTorch the same model I previously had in ONNX (ResNet18 without the part below the last ReLU)

As I am not really confident doing the first option, I will most likely do the latter. I will update this post as soon as I have some results.

Hello @Simon ,

Thanks for the feedback!

In your case I would suggest option 1, with some reshaping of the tensor, since you would no longer be using a global operation.
We have added GlobalAveragePool to the list of operators in our backlog.

Don’t hesitate to follow up here if you have any other questions. :slightly_smiling_face: