Can model inference work only on a certain device (no network)?

Can a model trained with Concrete ML be deployed on an iPad even without a network connection? If the answer is yes, does that mean both server.zip and client.zip have to be deployed on the same device? My goal is to make model inference work only on certain device(s).

Hello @w_m ,
Could you specify a bit more what you are trying to achieve? I'm not sure I understand your question.

In the meantime, if you need more information about deployment with Concrete ML, and more specifically about the client.zip and server.zip files, I invite you to take a look at this documentation section.
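To make the two roles concrete, here is a toy sketch of the split that client.zip and server.zip implement. This is plain Python with a made-up additive masking scheme, not real FHE and not the actual Concrete ML API; it only illustrates that the client holds the keys and encrypts/decrypts, while the server computes on ciphertexts without ever seeing the key:

```python
import secrets

# Toy additively homomorphic "encryption": Enc(x) = x + k (mod N).
# NOT real FHE -- it only illustrates the client/server split.
N = 2**61 - 1  # toy modulus

class Client:
    """Plays the role of client.zip: holds the key, encrypts inputs, decrypts outputs."""
    def __init__(self):
        self.key = secrets.randbelow(N)

    def encrypt(self, x: int) -> int:
        return (x + self.key) % N

    def decrypt(self, ct: int) -> int:
        return (ct - self.key) % N

class Server:
    """Plays the role of server.zip: computes on ciphertexts, never sees the key."""
    def run(self, ct: int) -> int:
        # Adding a public constant commutes with the masking,
        # mimicking homomorphic evaluation of a toy "model(x) = x + 5".
        return (ct + 5) % N

client = Client()
server = Server()
ct = client.encrypt(37)                   # client-side encryption
result = client.decrypt(server.run(ct))   # server computes blindly, client decrypts
print(result)                             # 42
```

In a real Concrete ML deployment the key generation and encryption/decryption live with the client artifacts, and only encrypted data ever reaches the server side.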

Thanks !

Hi @RomanBredehoft Thanks for the reply, and apologies for the confusion, since I have multiple questions. Let me list them one by one:

  1. The original idea is to prevent the model from being stolen, i.e. it cannot be copied and reused.
  2. However, the deployed software has to run on a local device such as an iPad, and model inference has to work offline (i.e. no network). Does that mean both server.zip and client.zip have to be deployed on the same device? In that case, is the deployment still secure, and does it meet the requirement in item 1?
  3. If item 2 doesn't work, are there any other ways for Concrete ML to achieve item 1 when all deployment has to happen in a local environment?

Let me know if any further explanation is needed. Thanks.

Hello again @w_m ,
Thanks for detailing your question! Looking at what you've said, I see two different possible scenarios:

  1. you have a model and you want to encrypt the weights, send them to a local device, and run the model on-prem using clear data and encrypted weights. In this scenario, you want to be able to run a model locally without leaking the weights
  2. you want to encrypt the model stored on the local device, and then decrypt it in RAM when running it. In this scenario, you protect the model from attacks while it is stored
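As a toy illustration of scenario 2, here is a stdlib-only sketch of "encrypted at rest, decrypted in RAM". The keystream construction and the key source are made up for illustration; this is not Concrete ML functionality, and a real deployment would use a vetted AEAD cipher (e.g. AES-GCM) and a hardware-backed key:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. For illustration only;
    # use a vetted AEAD cipher in any real deployment.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Scenario 2: the model weights are encrypted on disk...
key = b"device-bound secret"          # hypothetical key, e.g. from a secure enclave
weights_plain = b"\x01\x02\x03\x04"   # stand-in for serialized model weights
stored = xor(weights_plain, keystream(key, len(weights_plain)))  # what sits at rest

# ...and decrypted only in RAM at inference time.
in_ram = xor(stored, keystream(key, len(stored)))
assert in_ram == weights_plain
```

Note this only protects the weights while stored: anyone who can run the app on the device gets the decrypted weights in memory, which is why scenario 2 alone does not prevent copying.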

Could you confirm which scenario best fits your use case?

Hi @RomanBredehoft ,
Thanks for your reply.
As I am not 100% sure which one could achieve my requirements… I hope the two goals below can be met:

  1. The whole inference system, including the model, has to work on the same local device.
  2. Prevent the model (weights) from being copied and reused somewhere else (however this is done, as long as item 1 is met).

To achieve both goals above, can the model be encrypted and stored on the local device to do the inference (so it meets my item 1)? But even if that's feasible, I'm not sure how to meet my item 2.
Thanks.

Hello @w_m

“Avoid the model (weights) being copied and reused somewhere else” is the definition of DRM (digital rights management). FHE does not provide that, and DRM in general is quite a difficult subject.

What you can do to protect your model is to have some layers run in FHE on a server (trusted by the model owner). These layers cannot be copied, and the privacy of the user's data is maintained thanks to FHE. Look at our Hybrid Models in the documentation; this is exactly what they are for. But having things on a server is not in your setting, is it?

Even if we encrypted the weights with Concrete ML (which we don't do today), that would not prevent the FHE model from being copied and run elsewhere: by definition, FHE computations can be performed by anyone, so they can be copied to another machine. (On the other hand, encrypting the weights does protect the weights themselves, yes.)

What is usually done in DRM is to tie the execution to the local machine, so that it doesn't work on another machine. In practice, it's very hard to resist motivated attackers who try to remove these protections.
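For illustration, here is a minimal sketch of that node-locking idea in plain Python. Everything here is hypothetical (the fingerprint sources, the licensing flow, the `run_inference_if_authorized` helper); real DRM uses stronger signals such as a TPM or secure enclave, and, as said above, a determined attacker can still strip the check:

```python
import hashlib
import platform
import uuid

def machine_fingerprint() -> str:
    # Derive a stable identifier from local hardware/OS properties.
    # All of these signals can be spoofed by a motivated attacker.
    raw = f"{uuid.getnode()}-{platform.machine()}-{platform.system()}"
    return hashlib.sha256(raw.encode()).hexdigest()

def run_inference_if_authorized(licensed_fingerprint: str, model_input):
    # Refuse to run unless we are on the machine the license was issued for.
    if machine_fingerprint() != licensed_fingerprint:
        raise PermissionError("model is not licensed for this machine")
    return f"inference result for {model_input}"  # placeholder for the real model call

# At provisioning time, the vendor records this device's fingerprint...
license_fp = machine_fingerprint()
# ...and the check passes only on the same machine:
print(run_inference_if_authorized(license_fp, "sample"))
```

Note that this is exactly the kind of protection FHE does not give you: it lives entirely in clear application code, which is why it can be patched out.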

Hope it helps

@benoit Ok, understood. It's a pity I cannot apply Concrete ML in this case. I'll look for other solutions. Thanks~

Yes, DRM is another subject; it's very different. You can try to find available solutions on the market, but I'm afraid it's very hard to find something really solid. Cheers

Ok. will do. Thanks again :slight_smile:
