FHE with adversarial machine learning

How can I use Concrete ML to combine FHE with adversarial machine learning?

Hi A.ABBASS,

Thank you for your interest in Concrete ML.

To assist you effectively, we would like additional information.
Do you want to convert an existing neural network to FHE?
What type of attack would you like to handle?

Thanks!

I want to create a new adversarial deep-learning model that can defend against inference attacks, in which n fog nodes train the model using secure multi-party computation, verifiable computation, and fully homomorphic encryption.

Can you assist me in converting a GAN to an FHE scheme?

Hello @A.ABBASS,

To convert a model into an FHE-friendly model:
First, you must quantize your model, then make sure that the maximum precision of your circuit doesn't exceed 16 bits, because FHE only operates on integers with a 16-bit precision limit.
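To illustrate what quantization does, here is a generic uniform-quantization sketch (illustrative only, not Concrete ML's internal implementation):

import numpy as np

def uniform_quantize(x, n_bits):
    """Map float values to integers in [0, 2**n_bits - 1] (illustrative)."""
    scale = (x.max() - x.min()) / (2**n_bits - 1)
    zero_point = np.round(-x.min() / scale)
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)

x = np.array([-1.0, 0.0, 0.5, 1.0])
print(uniform_quantize(x, n_bits=4))  # -> [ 0  8 12 15]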

1- For quantization, you have two options: post-training quantization (shown in this use-case) or quantization-aware training (shown in this use-case).
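For the first option, post-training quantization is applied by Concrete ML at compile time. A minimal sketch, assuming float_model is an already trained float torch model and x_train is a representative calibration set (both placeholder names):

from concrete.ml.torch.compile import compile_torch_model

# Post-training quantization: Concrete ML quantizes the trained float
# model during compilation, calibrating the ranges on x_train.
quantized_module = compile_torch_model(
    float_model,  # placeholder: a trained torch.nn.Module
    x_train,      # placeholder: representative calibration inputs
    n_bits=6,     # quantization bit-width
)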

For quantization-aware training, you can use the Brevitas library to quantize your layers. Here is an example showing how to convert a custom torch model into its Brevitas counterpart:

import torch
from torch import nn


class TorchCustomModel(nn.Module):
    """A small torch network."""

    def __init__(self, input_shape, hidden_shape, output_shape):
        """Torch Model.

        Args:
            input_shape (int): Input size
            output_shape (int): Output size
            hidden_shape (int): Hidden size
        """
        super().__init__()

        self.linear1 = nn.Linear(input_shape, hidden_shape)
        self.linear2 = nn.Linear(hidden_shape, hidden_shape)
        self.linear3 = nn.Linear(hidden_shape, output_shape)

    def forward(self, x):
        x = torch.relu(self.linear1(x))
        x = torch.relu(self.linear2(x))
        x = self.linear3(x)
        return x

In Brevitas:

import brevitas.nn as qnn
from brevitas.quant import Int8ActPerTensorFloat, Int8WeightPerTensorFloat
from torch import nn


class QuantCustomModel(nn.Module):
    """A small quantized network with Brevitas."""

    def __init__(
        self,
        input_shape: int,
        output_shape: int,
        hidden_shape: int = 100,
        n_bits: int = 5,
        act_quant=Int8ActPerTensorFloat,
        weight_quant=Int8WeightPerTensorFloat,
    ):
        """Quantized torch model with Brevitas.

        Args:
            input_shape (int): Input size
            output_shape (int): Output size
            hidden_shape (int): Hidden size
            n_bits (int): Number of quantization bits
            weight_quant (brevitas.quant): Quantization protocol of the weights
            act_quant (brevitas.quant): Quantization protocol of the activations
        """
        super().__init__()

        # The first layer quantizes the input data
        self.quant_input = qnn.QuantIdentity(
            bit_width=n_bits, act_quant=act_quant, return_quant_tensor=True
        )

        self.linear1 = qnn.QuantLinear(
            in_features=input_shape,
            out_features=hidden_shape,
            weight_bit_width=n_bits,
            weight_quant=weight_quant,
            bias=True,
            return_quant_tensor=True,
        )
        self.relu1 = qnn.QuantReLU(bit_width=n_bits, act_quant=act_quant, return_quant_tensor=True)

        self.linear2 = qnn.QuantLinear(
            in_features=hidden_shape,
            out_features=hidden_shape,
            weight_bit_width=n_bits,
            weight_quant=weight_quant,
            bias=True,
            return_quant_tensor=True,
        )
        self.relu2 = qnn.QuantReLU(bit_width=n_bits, act_quant=act_quant, return_quant_tensor=True)

        self.linear3 = qnn.QuantLinear(
            in_features=hidden_shape,
            out_features=output_shape,
            weight_bit_width=n_bits,
            weight_quant=weight_quant,
            bias=True,
            return_quant_tensor=True,
        )

    def forward(self, x):
        x = self.quant_input(x)
        x = self.linear1(x)
        x = self.relu1(x)
        x = self.linear2(x)
        x = self.linear3(x)
        return x.value
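The quantized model can then be trained like any regular torch model. A minimal training sketch (the hyper-parameters and train_loader are placeholders):

import torch

model = QuantCustomModel(input_shape=784, output_shape=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

for x_batch, y_batch in train_loader:  # placeholder: your torch DataLoader
    optimizer.zero_grad()
    loss = criterion(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()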

2- After the quantization step, you have to compile the model to check whether it is really FHE-compatible. If you used Brevitas, you can compile it with the compile_brevitas_qat_model function. The function will raise an error if your layers are not supported by the Concrete stack, if the network is not properly quantized, or if the maximum bit-width exceeds 16 bits. Note that the smaller the bit-width, the faster the inference.
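For example, a minimal compilation sketch, assuming quant_model is a trained instance of the QuantCustomModel above and x_train is a representative input set (placeholder names):

from concrete.ml.torch.compile import compile_brevitas_qat_model

# Compile the trained Brevitas model into an FHE-executable module.
# This raises an error if a layer is unsupported, the network is not
# properly quantized, or an accumulator exceeds 16 bits.
quantized_module = compile_brevitas_qat_model(
    quant_model,  # placeholder: trained QuantCustomModel instance
    x_train,      # placeholder: representative inputs for calibration
)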

I recommend starting with the CIFAR use-case, and if you have any questions, feel free to reach out to us.

Thanks!

Can you help me convert my CNN model to an FHE-friendly model using Concrete ML?

(Attachment cGAN_MNIST_Tensorflow.ipynb is missing)

Hello @A.ABBASS, I believe we would need more information about your issue in order to be able to guide you 🙂

How can I submit my cGAN model so that it can be viewed by you?

Hello @A.ABBASS, we're unfortunately not able to do the conversion for you. We can assist users, but we can't do all the work. So, as explained earlier in the thread:

  • first, try to implement your task without FHE, i.e., purely in torch
  • then, convert it to FHE using Brevitas and its quantized ops

It's really not a simple task, so you might want to familiarize yourself with easier tasks first, and in particular with the built-in models (see the doc and the sketch below). And when you move on to deep learning, have a look at the existing examples in the repo (e.g., the CIFAR one), which follow the same method: torch first, then quantized ops.
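For instance, a minimal built-in-model sketch following the scikit-learn-like API from the docs (x_train, y_train, and x_test are placeholders; the exact predict keyword may vary across Concrete ML versions):

from concrete.ml.sklearn import LogisticRegression

model = LogisticRegression(n_bits=8)
model.fit(x_train, y_train)                    # train in the clear
model.compile(x_train)                         # compile to an FHE circuit
y_pred = model.predict(x_test, fhe="execute")  # encrypted inference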