Concrete-ML deep learning benchmark

I tried to run the deep learning benchmark from Concrete ML and got this error:

Sample #1 (over 1) of ShallowNarrowCNN_2 on MNIST
-------------------------------------------------
Loading MNIST_ShallowNarrowCNN.pt

test1
test2
Process (Sampler):
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/py_progress_tracker/track.py", line 136, in __call__
    main(**parameters)
  File "deep_learning.py", line 935, in perform_deep_learning_benchmark
    evaluate_pre_trained_cnn_model(dataset, cnn_class, config, args)
  File "deep_learning.py", line 663, in evaluate_pre_trained_cnn_model
    cnn_model = load_pre_trained_cnn_model(cnn_model, dataset)
  File "deep_learning.py", line 325, in load_pre_trained_cnn_model
    state_dict = torch.load(pt_file)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1002, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.
-------------------------------------------------

ShallowNarrowCNN_2 on MNIST over 1 samples
------------------------------------------
Not Working
------------------------------------------

test1 and test2 are just strings that I printed for debugging purposes. To be more specific, the problem happens here:

    cnn_model.toggle_pruning(False)
    print("test1")

    # Load the weights and update the model
    print("test2")
    state_dict = torch.load(pt_file)
    print("test3")

Does anyone know what is happening here? Could it be that the pre-trained Torch model is corrupted?

Hi Dwen, I think you’re right about this: the serialized PyTorch model is probably out of date.

Thanks for pointing this out!

If you only need to test inference speed and don’t need to benchmark accuracy, you can simply remove the line that loads the saved weights. You’ll then be running a randomly initialized model, which has exactly the same inference time as a trained one.
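
A minimal sketch of that change, based on the snippet you posted (cnn_model and pt_file are the names from your code; the load_state_dict call is an assumption, since it isn’t shown):

    cnn_model.toggle_pruning(False)

    # Skip restoring the saved weights: the model keeps its random
    # initialization, and inference latency is unchanged because the
    # architecture is identical to the trained model's.
    # state_dict = torch.load(pt_file)           # removed
    # cnn_model.load_state_dict(state_dict)      # assumed follow-up, also removed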


Hello Dwen,
Could you pull the LFS files using git lfs pull and try again? That may fix your issue!
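
For context, a Git LFS pointer is a small plain-text file beginning with version https://git-lfs.github.com/spec/v1, and that leading v is exactly the invalid load key, 'v' that pickle reports. A quick check that the .pt file is a real checkpoint rather than a pointer (using the file name from your traceback; adapt the path as needed):

    # If this prints bytes starting with b'version https://git-lfs...',
    # the file is still an LFS pointer and git lfs pull should replace it
    # with the actual binary checkpoint.
    with open("MNIST_ShallowNarrowCNN.pt", "rb") as f:
        print(f.read(80))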


Hello,

Sorry for the late reply. I have tried it, and it still doesn’t work. However, your suggestion does work for other examples in the use_case_examples folder. Thank you for your help!

Hello @Dwen !
Great to know it works for the other examples. However, if I understand correctly, you still get the error in the benchmarks, right?