CIFAR-10 Training & Quantization

I am trying to reproduce your tutorial here.

I trained the network,

python3 bnn_pynq_train.py --data ./data --experiments ./experiments

and it evaluates to 87% accuracy:

python3 bnn_pynq_train.py --evaluate --resume ./experiments/CNV_2W2A_2W2A_20230807_150413/checkpoints/best.tar

But when simulating or evaluating the quantized model, the accuracy drops to around 26%:

# modified the weights directory in the .py script
python3 evaluate_torch_cml.py --rounding_threshold_bits 1 2 3 4 5 6 7 8
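
For context, the simulated accuracy is obtained roughly as in the sketch below (placeholder names, assumed recent Concrete ML API; the actual evaluate_torch_cml.py script may differ):

```python
# Rough sketch of a simulated evaluation (assumed Concrete ML API, placeholder
# names; not the actual contents of evaluate_torch_cml.py).
import numpy as np
from concrete.ml.torch.compile import compile_brevitas_qat_model

def simulated_top1(torch_model, calib_images, test_images, test_labels, n_bits):
    """Compile the QAT model with the given rounding bit-width and evaluate in FHE simulation."""
    quantized_module = compile_brevitas_qat_model(
        torch_model,
        calib_images,                    # calibration input-set (numpy array, NCHW)
        rounding_threshold_bits=n_bits,
    )
    # fhe="simulate" runs the compiled circuit without encrypted execution
    preds = quantized_module.forward(test_images, fhe="simulate")
    return float(np.mean(preds.argmax(axis=1) == test_labels))

# e.g. {b: simulated_top1(model, calib, x_test, y_test, b) for b in range(1, 9)}
```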

Is there a quantization/fine-tuning step missing from the tutorial?
My weights file and a copy of the .py files are located here.


Hello @lfolkerts,

Thanks for letting us know; we are looking into it. :slightly_smiling_face:

We'll come back to you next week.

Hello @lfolkerts,
We have checked the tutorial on our side, and there is indeed a problem with the rounding feature. We need to investigate the issue further and will come back to you once we know more about it.

Thanks a lot for the report!

Hello @lfolkerts,
Good news: we found the reason why the top-1 accuracy was so low! Basically, we were not using the best input-set when compiling the model(s). If you clone the Concrete ML repository again and re-run the CIFAR-10 training example, you should be able to get the right results back. For example, you'll reach a top-1 accuracy of 86% with 6-bit rounding.
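
To illustrate the input-set point with a minimal sketch (placeholder variable names, assumed Concrete ML API; not the exact tutorial code): the data passed at compile time should be representative of the real, normalized CIFAR-10 inputs, since it drives the calibration of the quantized circuit.

```python
# Sketch: build a representative calibration input-set from the training data
# before compiling (placeholder names; `train_images` is assumed to be the
# normalized CIFAR-10 training set as a float32 array of shape (N, 3, 32, 32)).
import numpy as np
from concrete.ml.torch.compile import compile_brevitas_qat_model

rng = np.random.default_rng(0)
calibration_inputset = train_images[rng.choice(len(train_images), size=200, replace=False)]

quantized_module = compile_brevitas_qat_model(
    torch_model,                  # the trained CNV_2W2A Brevitas network
    calibration_inputset,         # representative input-set used for calibration
    rounding_threshold_bits=6,    # the 6-bit rounding configuration mentioned above
)
```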

Besides, we noticed a small accuracy drop (~1.5% with 6-bit rounding) compared to what we state in the README for the rounding approach with the old input-set. We are still investigating this!

Thanks for the report :wink:

Hello again @lfolkerts,
This is just to tell you that we have updated the expected accuracy scores for the different rounding bits. The previous ones were a bit better, but that was due to a bug that has since been fixed; as you rightly pointed out, it was no longer working with FHE simulation and computation.

Thanks again for the report, and keep us updated on your great project!
