No module named "model"

Hello,
Would it be possible to tell us what you are trying to do, for example by sharing the part of the code you are trying to run? Also, which Concrete ML version are you using?

Thanks !

I pulled the latest from Docker, and I am running the mnist_fhe.ipynb notebook.

Hello,

Thank you for reaching out to us.

We have checked the recent public version and were unable to replicate your error.

To assist you, could you please ensure that “model.py” and “mnist_in_fhe.ipynb” are located within the same folder, specifically in “./use_case_examples/mnist/”?
Additionally, it would be helpful if you could provide some information on how you are running the notebook. For instance, are you using IPython?

For your convenience, we recommend following these steps:

git clone https://github.com/zama-ai/concrete-ml.git
cd concrete-ml
python3.8 -m venv .venv
source .venv/bin/activate
make sync_env

Please let us know if you have resolved the error you encountered.

Thanks !

I'm sorry that I didn't see any information before. I am a novice; how can I use Concrete ML to train networks and test them using techniques such as PBS?

Hello,

No problem, we are glad to help you get started with Concrete ML!

Concrete ML is an open-source, privacy-preserving machine learning framework based on FHE.

The API is similar to scikit-learn's, with a few subtleties to consider, such as bit-width limitations and compilation. In addition, you don't need any knowledge of cryptography: PBS is implemented under the hood in Concrete ML.
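To give an intuition for what the bit-width limitation means, here is a hedged sketch in plain Python (this is not the Concrete ML API, just the underlying idea): before compiling to FHE, model values must be quantized so they fit into a small number of bits.

```python
# Illustration only: uniform quantization of floats to n bits, the kind of
# constraint Concrete ML works under so encrypted values fit FHE bit-widths.
# This is NOT the Concrete ML API, just the concept.

def quantize(values, n_bits):
    """Map floats to integers in [0, 2**n_bits - 1] (uniform quantization)."""
    lo, hi = min(values), max(values)
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi != lo else 1.0
    return [round((v - lo) / scale) for v in values], scale, lo

def dequantize(q_values, scale, zero):
    """Approximate reconstruction of the original floats."""
    return [q * scale + zero for q in q_values]

weights = [-0.7, -0.1, 0.0, 0.4, 0.9]
q, scale, zero = quantize(weights, n_bits=3)  # only 2**3 = 8 levels available
approx = dequantize(q, scale, zero)
print(q)       # small integers, each fits in 3 bits
print(approx)  # close to the original weights, within quantization error
```

The fewer bits you use, the coarser the approximation but the cheaper the FHE computation; that trade-off is why bit-width matters so much in Concrete ML.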

To set up your environment, you can follow the steps outlined in my previous message.

To begin, I highly recommend starting with the notebooks located in the docs/advanced_examples directory.

For more advanced examples, such as training/fine-tuning deep learning models or deploying models, see the use_case_examples directory.

Let us know if you need any further assistance!

Thanks!

Thanks a lot.
But when I run checkpoint = torch.load("state_dict.pt") in use_case_examples/mnist/mnist_in_fhe.ipynb, I get UnpicklingError: invalid load key, 'v'. Do you know how to solve it?

Hello,

Many thanks for pointing out this issue.

I would like to confirm: did you attempt to load the model without running the training cells?

Regarding your error, it is related to Git LFS: try running git lfs pull --include use_case_examples/mnist/state_dict.pt.
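For context: when Git LFS files have not been pulled, the .pt file on disk is actually a small text pointer whose first line starts with "version https://git-lfs...", and pickle reads that leading 'v' as an invalid load key, hence the error message. A quick hedged check to confirm this is the cause:

```python
# Diagnostic sketch: detect whether a file is a Git LFS pointer rather than
# the real binary payload. LFS pointer files are tiny text files whose first
# line starts with "version https://git-lfs" -- pickle sees that leading 'v'
# and raises "invalid load key, 'v'".
import os
import tempfile

def is_lfs_pointer(path):
    with open(path, "rb") as f:
        head = f.read(24)
    return head.startswith(b"version https://git-lfs")

# Demo with a fake pointer file (what state_dict.pt looks like before
# `git lfs pull`); the oid/size values here are placeholders:
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:abcd\n"
    "size 123\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".pt", delete=False) as f:
    f.write(pointer_text)
    fake_pt = f.name

print(is_lfs_pointer(fake_pt))  # True: `git lfs pull` would fetch the real file
os.remove(fake_pt)
```

If the check returns True for your state_dict.pt, the git lfs pull command above should resolve the UnpicklingError.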

But inevitably, you will face the following issue:

RuntimeError: Error(s) in loading state_dict for MNISTQATModel:
Missing key(s) in state_dict: "fc1.weight_orig", "fc1.weight_mask", "fc2.weight_orig", "fc2.weight_mask", "fc3.weight_orig", "fc3.weight_mask", "fc4.weight_orig", "fc4.weight_mask". 
Unexpected key(s) in state_dict: "fc1.weight", "fc2.weight", "fc3.weight", "fc4.weight".

Here is a quick workaround for you until we solve the problem:

  • Option 1: execute the training cells
  • Option 2: execute the following cell
model = MNISTQATModel(quantization_bits, quantization_bits)
model = model.to(device)
model.state_dict().keys()
# Output:
> odict_keys(['fc1.weight', 'bn1.weight', 'bn1.bias', 'bn1.running_mean', 'bn1.running_var', 'bn1.num_batches_tracked', 'fc2.weight', 'bn2.weight', 'bn2.bias', 'bn2.running_mean', 'bn2.running_var', 'bn2.num_batches_tracked', 'fc3.weight', 'bn3.weight', 'bn3.bias', 'bn3.running_mean', 'bn3.running_var', 'bn3.num_batches_tracked', 'fc4.weight'])

checkpoint = torch.load("state_dict.pt")
checkpoint.keys()
# Output:
> odict_keys(['fc1.weight', 'bn1.weight', 'bn1.bias', 'bn1.running_mean', 'bn1.running_var', 'bn1.num_batches_tracked', 'fc2.weight', 'bn2.weight', 'bn2.bias', 'bn2.running_mean', 'bn2.running_var', 'bn2.num_batches_tracked', 'fc3.weight', 'bn3.weight', 'bn3.bias', 'bn3.running_mean', 'bn3.running_var', 'bn3.num_batches_tracked', 'fc4.weight'])

model.load_state_dict(checkpoint)

Actually, executing model.prune(sparsity, False) changes the key names and adds pruning masks; that's why load_state_dict doesn't work with the old checkpoint.

model = MNISTQATModel(quantization_bits, quantization_bits)
model = model.to(device)
model.prune(sparsity, False)
model.state_dict().keys()
# Output
> odict_keys(['fc1.weight_orig', 'fc1.weight_mask', 'bn1.weight', 'bn1.bias', 'bn1.running_mean', 'bn1.running_var', 'bn1.num_batches_tracked', 'fc2.weight_orig', 'fc2.weight_mask', 'bn2.weight', 'bn2.bias', 'bn2.running_mean', 'bn2.running_var', 'bn2.num_batches_tracked', 'fc3.weight_orig', 'fc3.weight_mask', 'bn3.weight', 'bn3.bias', 'bn3.running_mean', 'bn3.running_var', 'bn3.num_batches_tracked', 'fc4.weight_orig', 'fc4.weight_mask'])
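To make the mismatch concrete, here is a hedged sketch, using plain dictionaries rather than torch, of how an old checkpoint's keys could in principle be remapped to the pruned layout: each fcN.weight becomes an fcN.weight_orig paired with an all-ones fcN.weight_mask (i.e. keeping every weight). This mirrors the renaming that torch.nn.utils.prune performs; it is an illustration of the problem, not a tested fix.

```python
# Illustration (plain dicts, no torch): remap an old checkpoint whose linear
# layers were saved as "fcN.weight" into the layout expected after pruning,
# where each pruned layer stores "fcN.weight_orig" plus a "fcN.weight_mask".

PRUNED_LAYERS = ("fc1", "fc2", "fc3", "fc4")  # layers pruned in the MNIST model

def remap_to_pruned(checkpoint):
    remapped = {}
    for key, value in checkpoint.items():
        layer, _, param = key.partition(".")
        if layer in PRUNED_LAYERS and param == "weight":
            remapped[f"{layer}.weight_orig"] = value
            # All-ones mask = keep every weight (no actual pruning applied).
            remapped[f"{layer}.weight_mask"] = [1.0] * len(value)
        else:
            remapped[key] = value  # batch-norm entries etc. are unchanged
    return remapped

old_ckpt = {"fc1.weight": [0.2, -0.5], "bn1.weight": [1.0, 1.0]}
new_ckpt = remap_to_pruned(old_ckpt)
print(sorted(new_ckpt))  # ['bn1.weight', 'fc1.weight_mask', 'fc1.weight_orig']
```

In practice, though, option 1 (re-running the training cells) remains the safer path, as recommended below.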

I noticed that the state_dict.pt file is old, so I highly recommend proceeding with option 1; otherwise you won't get the same performance.

Please let me know if the issue persists.

Thanks!