Is there a way to use Concrete ML in client/server mode? [from Discord]

Is there a way to use Concrete ML in a client/server fashion at the moment? Have the client encrypt and send, and the server compute and send back.

What I actually need is to see the size of the encrypted input that the client sends to the server. Is there a way to do that?

Yes, we have ways to deploy quite easily; see Production Deployment - Concrete ML

And notably the examples:

For the “size of the encrypted input”, let me ask someone else from the ML team.

Thanks <@902820351577313280>, that would be great

Hello <@930993161034149948>, as <@902820351577313280> mentioned, all the tools needed to deploy in a client/server fashion are available.

About the size of ciphertexts, there is an example of how to evaluate it in cell 10 of the ClientServer notebook.
Since the serialized ciphertext is a bytes object, you simply evaluate `len(serialized_ciphertext)` to get its size in bytes (Deploy - Concrete).
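As a minimal sketch of the measurement itself: the client's serialization step returns `bytes`, so `len` gives the payload size directly. The commented-out Concrete ML calls show where the real ciphertext would come from; a stand-in byte string is used here so the snippet runs without the library installed, and the 4096-byte size is arbitrary, not representative of a real ciphertext.

```python
# Sketch: measuring the size of the encrypted input the client sends.
# In a real Concrete ML deployment the ciphertext would come from the client:
#   from concrete.ml.deployment import FHEModelClient
#   client = FHEModelClient(path_dir="deployment", key_dir="keys")
#   client.generate_private_and_evaluation_keys()
#   serialized_ciphertext = client.quantize_encrypt_serialize(x)
# Stand-in payload so this snippet runs on its own (size is arbitrary):
serialized_ciphertext = b"\x00" * 4096

size_bytes = len(serialized_ciphertext)  # size in bytes
size_kib = size_bytes / 1024             # same size in KiB
print(f"Encrypted input: {size_bytes} bytes ({size_kib:.1f} KiB)")
```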

While still in the early stages, we also provide some tools to deploy any Concrete ML model to AWS using boto3 scripts in the Deployment notebook. If you want to deploy using Docker, we also provide a basic FastAPI server file with a Dockerfile. You can see some examples in the deployment folder of our use-case-examples.
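To illustrate the serving side of such a deployment without pulling in FastAPI, here is a standard-library-only sketch of an endpoint that receives a serialized ciphertext and returns a serialized result. The endpoint path `/predict` and the echo behavior are illustrative stand-ins; in a real server the commented `server.run(...)` call from Concrete ML's deployment API would produce the response payload.

```python
# Minimal HTTP endpoint sketch (standard library only) for an FHE inference server.
# The actual Concrete ML example uses FastAPI; the FHE computation is a stand-in here.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the serialized ciphertext sent by the client.
        length = int(self.headers.get("Content-Length", 0))
        serialized_input = self.rfile.read(length)

        # In a real deployment, run the FHE computation server-side, e.g.:
        #   result = server.run(serialized_input, serialized_evaluation_keys)
        result = serialized_input  # stand-in: echo the payload back

        # Send the serialized result back to the client.
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # keep output quiet


def start_server(port: int = 0) -> HTTPServer:
    """Start the server on a background thread; port 0 picks a free port."""
    srv = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

The client would then POST its serialized ciphertext to this endpoint and deserialize the response; the round-trip payload sizes are exactly the `len(...)` values discussed above.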