Can anyone point out the code which implements the logic for applying torch operators (torch.conv2d, torch.reshape, matmul, etc.) on ciphertexts?
Hi @Rish,
The ciphertext manipulations are abstracted away from the user much lower in the stack. Concrete ML essentially converts any ML model to its quantized / integer representation, which is required to work with FHE. Those integer operations can be found in quantized_ops.py.
Once we have the integer representation (integer parameters + integer arithmetic), we call the Concrete library, which compiles the graph represented by the Python code we built into FHE operations.
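For illustration, here is a minimal sketch of that pipeline using Concrete ML's compile_torch_model; the model, input set and n_bits value are made up for the example, and exact API details may vary between versions:

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

class TinyCNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 2, kernel_size=3)
        self.fc = torch.nn.Linear(2 * 6 * 6, 2)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

# A representative input set calibrates the quantization scales
inputset = numpy.random.uniform(-1, 1, size=(10, 1, 8, 8))

# Quantize to integer weights/activations, then compile the
# resulting integer graph with Concrete into an FHE circuit
quantized_module = compile_torch_model(TinyCNN(), inputset, n_bits=4)

# Run inference (fhe="simulate" skips actual encryption for speed)
y = quantized_module.forward(inputset[:1].astype(numpy.float32), fhe="simulate")
```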
Thank you @jfrey, I was also curious about:
* torch.concat, torch.cat
* torch.nn.functional.pad
* torch.nn.Unfold

Can we do these operations on ciphertexts, or are they restricted to plaintext?
Yes, in theory we can perform these tensor manipulations easily in FHE, but there are some caveats, especially with the concat operator.
In FHE, we work with low precision and scale floating-point values differently for each layer to maximize precision. This makes it harder to concatenate tensors with different scaling factors.
So, concatenating tensors that are outputs from the same layer (with identical scaling) works fine. However, concatenating outputs from different layers is more complex due to potentially different scaling factors.
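To make this concrete, here is an illustrative plain-Python sketch (not actual Concrete ML code; the scales are made up) of why integers quantized with different scales cannot simply be concatenated:

```python
import numpy as np

def quantize(x, scale):
    return np.round(x / scale).astype(np.int64)

def dequantize(q, scale):
    return q * scale

a = np.array([0.5, 1.0])    # output of layer 1: small value range
b = np.array([5.0, 10.0])   # output of layer 2: larger value range
scale_a = 1.0 / 32          # scales chosen from each tensor's range
scale_b = 10.0 / 32
qa = quantize(a, scale_a)   # [16, 32]
qb = quantize(b, scale_b)   # [16, 32]

# The raw integers are identical, yet they represent very different
# values: qa's 32 means 1.0 while qb's 32 means 10.0. Concatenating
# them directly would be meaningless. A correct concat must first
# requantize both inputs to a common scale, losing precision on the
# finer-scaled tensor:
common = max(scale_a, scale_b)
q_cat = np.concatenate([
    np.round(qa * scale_a / common),
    np.round(qb * scale_b / common),
]).astype(np.int64)
print(dequantize(q_cat, common))  # [0.625 0.9375 5. 10.] -- 0.5/1.0 degraded
```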
Got it! Thank you. I think the unfold and padding are realized using a table look-up.
There is no table look-up in such operations. Tensor manipulations just move pieces of the tensor around; that's a "free" operation in FHE.
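To illustrate why this is "free": if you think of an encrypted tensor as an array of opaque ciphertext handles, operators like reshape or pad only rearrange or copy those handles, as in this toy sketch (the handle strings are placeholders, not real ciphertexts):

```python
import numpy as np

# Pretend each element is an opaque ciphertext handle
enc = np.array([["ct00", "ct01"],
                ["ct10", "ct11"]], dtype=object)

# reshape: the same handles, just a new layout -- no crypto involved
flat = enc.reshape(-1)      # ['ct00' 'ct01' 'ct10' 'ct11']

# pad: surround the handles with (trivial) encryptions of zero
padded = np.full((4, 4), "trivial_0", dtype=object)
padded[1:3, 1:3] = enc      # original ciphertexts stay in the center
```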
But to pad zeroes onto a ciphertext, meaning produce a ciphertext that is the encryption of the zero-padded version of an encrypted input tensor, how do you do this non-linear op in FHE without a table look-up?
A tensor in Concrete ML contains LWE samples, so each sample can be processed individually.
Creating an encrypted zero (or any constant) with no noise is possible: simply set the ciphertext mask to 0 and the message bits to the constant. This value is then further computed upon by the conv operator, which combines it with other encrypted values. No PBS is needed.
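Here is a toy LWE sketch (illustrative only; not Concrete's actual implementation or parameters) showing that a trivial ciphertext with an all-zero mask decrypts to its constant and adds homomorphically with real ciphertexts:

```python
import numpy as np

n, q, delta = 8, 2**16, 2**8   # toy LWE dimension, modulus, scaling factor
rng = np.random.default_rng(0)
s = rng.integers(0, 2, n)      # toy binary secret key

def encrypt(m):
    """Regular LWE encryption: random mask, small noise."""
    a = rng.integers(0, q, n)
    e = int(rng.integers(-2, 3))           # small noise
    b = (int(a @ s) + delta * m + e) % q
    return a, b

def trivial_encrypt(m):
    """Trivial encryption: mask = 0, body = scaled message, no noise."""
    return np.zeros(n, dtype=int), (delta * m) % q

def decrypt(ct):
    a, b = ct
    return round(((b - int(a @ s)) % q) / delta) % (q // delta)

ct_x = encrypt(3)             # a real client-encrypted value
ct_zero = trivial_encrypt(0)  # a padding "zero", no PBS involved
# Homomorphic addition: component-wise sum of masks and bodies
ct_sum = ((ct_x[0] + ct_zero[0]) % q, (ct_x[1] + ct_zero[1]) % q)
print(decrypt(ct_sum))        # 3 -- adding a trivial zero changes nothing
```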
Thank you @andrei-stoian-zama for the answer, I was a little confused about how we could pad zeroes around an encrypted tensor.
Please help me understand this: setting the mask to zero gives a deterministic encryption of an LWE sample, but how do we pad zeroes around the encrypted tensor? Are we concatenating zeroes around the encrypted tensor to form a tensor which is an encryption of the padded input?
More specifically:
* torch.nn.functional.pad

Is it possible to apply this operator on encrypted input?
Yes, you can apply pad to encrypted inputs. Only the padding values are "trivial" encryptions of 0. These values are summed with the actual client-provided encrypted values by the conv operator that usually follows the pad operator.
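As a sketch, a model that pads its input before a convolution should compile as described, assuming the compile_torch_model API shown earlier (details may differ between Concrete ML versions):

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

class PadConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 1, kernel_size=3)

    def forward(self, x):
        # The zeros added here become trivial encryptions of 0 in FHE
        x = torch.nn.functional.pad(x, (1, 1, 1, 1))
        # conv then sums the trivial zeros with real ciphertexts
        return self.conv(x)

inputset = numpy.random.uniform(-1, 1, size=(10, 1, 8, 8))
quantized_module = compile_torch_model(PadConv(), inputset, n_bits=4)
y = quantized_module.forward(inputset[:1].astype(numpy.float32), fhe="simulate")
```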