How to prove that objects and computations are encrypted?

What methods can be used to demonstrate that encrypted operations, models, and output files from concrete/concrete-ml are actually encrypted? Similarly, how does one demonstrate that no clear text files have been created and left behind after compiling or key creation?

This is a very interesting question. Let me check internally what the best answer would be.

In general, to verify that any software that uses encryption does so without leaking data, the best way is to have it audited by a specialized cybersecurity/cryptography team. Our code is open source, so anyone can audit it. More simply, some of our demos show the serialized encrypted data, which is done as follows:

# Suppose client is the FHEModelClient corresponding to a Concrete ML model that was
# saved with FHEModelDev: see https://docs.zama.ai/concrete-ml/guides/serialization

# emb_x is a numpy float vector (the clear input)
encrypted_x = client.quantize_encrypt_serialize(emb_x)  # quantize, encrypt and serialize the input
encrypted_quant_tokens_hex = encrypted_x.hex()  # show the ciphertext as hexadecimal
print(encrypted_quant_tokens_hex)  # should look like random bits
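
To make "should look like random bits" slightly more concrete, here is a minimal sketch (using only the Python standard library, and assuming the encrypted_x bytes object from the snippet above) that measures the byte-level Shannon entropy of the serialized ciphertext. A well-encrypted blob should come out close to 8 bits per byte, while typical cleartext (text, floats, model files) sits noticeably lower; serialization headers may pull the figure down a little, so treat this as a sanity check rather than a proof.

import math
from collections import Counter

# encrypted_x is the serialized ciphertext (a bytes object) from the snippet above
byte_counts = Counter(encrypted_x)
total = len(encrypted_x)

# Shannon entropy of the byte distribution, in bits per byte (theoretical maximum: 8.0)
entropy = -sum((n / total) * math.log2(n / total) for n in byte_counts.values())
print(f"{total} ciphertext bytes, ~{entropy:.2f} bits of entropy per byte")

High entropy is a necessary but not sufficient condition, which is why an independent audit remains the real answer.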

An audit does make sense. Do you have recommendations for continuous software supply chain security when leveraging the concrete-ml repo? Assuming that containers like zamafhe/concrete-ml also need auditing, what do you recommend as best practice for securing Zama containers?

Concrete ML is built on top of Concrete (i.e., all the cryptography is done there), so as long as the encryption is done correctly in Concrete, you are safe in Concrete ML. But yes, if you want an additional audit of Concrete ML itself, it can't be a bad thing.
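
To illustrate that separation, here is a minimal sketch of the client/server deployment flow, assuming the FHEModelDev / FHEModelClient / FHEModelServer API from the serialization guide linked above; path_dir, key_dir, model, and x_clear are hypothetical placeholders (a deployment directory, a local key directory, an already fitted and compiled Concrete ML model, and a clear numpy input). The point is that the server side, where Concrete executes the circuit, only ever handles serialized ciphertexts and evaluation keys, while the private keys stay on the client.

from concrete.ml.deployment import FHEModelDev, FHEModelClient, FHEModelServer

# Developer side: save the compiled model artifacts to a deployment directory
# (path_dir is a placeholder; model is an already fitted and compiled Concrete ML model)
FHEModelDev(path_dir, model).save()

# Client side: generate keys locally, then quantize/encrypt/serialize the input
client = FHEModelClient(path_dir, key_dir=key_dir)
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x_clear)

# Server side: only ciphertexts and evaluation keys are handled here, never private keys
server = FHEModelServer(path_dir)
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# Back on the client: only the holder of the private keys can decrypt the result
result = client.deserialize_decrypt_dequantize(encrypted_result)

Everything that crosses the trust boundary (encrypted_input, serialized_evaluation_keys, encrypted_result) is just bytes and can be dumped as hex, exactly as in the first snippet.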

And regarding audits: we at Zama don't make recommendations and stay neutral.
Cheers