Is there a way to stack your privacy-preserving techniques with OTHER techniques to protect the training set?

We have seen that FHE techniques can protect what is being run inside the model.
However, I do not see how this would protect against other types of attack, for example identifying the training set by exploiting the fact that an ML model usually overfits, or slightly overfits, its training data.
Would it be possible to stack FHE techniques with other privacy-preserving techniques used in scaled frameworks like federated learning?


FHE is mainly used to protect users’ private data during inference. Data that may have leaked into the model parameters at training time (e.g. through overfitting) is still at risk with FHE: a malicious user could make adversarial queries to try to extract training data. These kinds of attacks (model inversion, membership inference, …) need to be tackled separately with other privacy-enhancing techniques, such as differential privacy or secure multi-party computation, which may be better suited to addressing the problem at training time.
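
As a rough illustration of the training-side defence, here is a minimal DP-SGD-style training loop in plain PyTorch (a hedged sketch, not tied to any specific library or to the FHE inference code): each example's gradient is clipped and Gaussian noise is added before the update, which is the core idea behind differential privacy in training. The model, data and hyperparameters are placeholders for the example.

```python
import torch
from torch import nn

# Toy data and model: stand-ins for a real training set and network.
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0         # per-example gradient clipping bound
noise_multiplier = 1.1  # Gaussian noise scale relative to clip_norm
lr = 0.1

for epoch in range(5):
    # Accumulate clipped per-example gradients (microbatches of size 1).
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(len(X)):
        model.zero_grad()
        loss = loss_fn(model(X[i : i + 1]), y[i : i + 1])
        loss.backward()
        # Clip this example's gradient to bound its influence on the model.
        total_norm = torch.norm(
            torch.stack([p.grad.norm() for p in model.parameters()])
        )
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    # Add calibrated Gaussian noise, then take an averaged SGD step.
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g += torch.randn_like(g) * noise_multiplier * clip_norm
            p -= lr * g / len(X)
```

A model trained this way can then be deployed behind FHE inference, so the training-time and inference-time protections are simply stacked.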

Federated Learning with FHE is a promising research direction that we are currently exploring. It might help reduce the risk of overfitting by distributing training across different parties, but I don’t think it will prevent it on its own (you would need to enforce a more specific strategy for gradient accumulation / weight updates).
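
To make the federated angle concrete, here is a hedged sketch of encrypted update aggregation using the python-paillier (`phe`) library. Paillier is only additively homomorphic (not full FHE) and the client updates are random placeholders, but it shows the pattern: clients encrypt their updates, the server sums ciphertexts without seeing any individual contribution, and only the key holder decrypts the aggregate.

```python
import numpy as np
from phe import paillier  # pip install phe

# Key pair held by the clients (or a trusted key authority), not the server.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Placeholder model updates from three clients (e.g. flattened gradients).
client_updates = [np.random.randn(4) for _ in range(3)]

# Each client encrypts its update element-wise before sending it.
encrypted_updates = [
    [public_key.encrypt(float(v)) for v in update] for update in client_updates
]

# The server adds ciphertexts; it never sees an individual update in the clear.
aggregated = encrypted_updates[0]
for update in encrypted_updates[1:]:
    aggregated = [a + b for a, b in zip(aggregated, update)]

# Only the key holder can decrypt the averaged update.
average = np.array([private_key.decrypt(v) for v in aggregated]) / len(client_updates)
print(average)
```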

So yes, it is possible to stack FHE with other privacy-preserving techniques: either by combining them independently of each other (e.g. training with differential privacy and running inference with FHE), or by building a more integrated solution that combines them (e.g. federated learning in FHE).
