Data encoding (inputset) and Floating points

Hi, I am planning to use concrete numpy for my project, but I do have a few questions.

  1. Like other FHE libraries, is it also possible to pack ciphertexts in Concrete Numpy? Or are there any data encoding techniques that can be used?

  2. Is it really not possible to output floating point values in Concrete Numpy?

  3. For ML models, is Concrete Numpy limited to converting the predictor function to its FHE equivalent? Or is it also possible to convert any other function with floating point outputs?

Thank you!


Hello Er_21!

Here are the answers to your questions :slight_smile:

  1. We don’t currently do ciphertext packing in Concrete Numpy; it may come at a later date, however! Keep an eye out for the changelogs :slight_smile: (We do support tensor inputs, though I'm not sure that's what you are looking for.)

  2. No, it is currently not possible to output floating point values in Concrete Numpy, because we use an exact paradigm over integers. However, there are ways to use floating points for intermediate computations in some cases; check this documentation page. The gist is that as long as you have a subgraph with a single variable integer input and a single integer output (which can be created with .astype(numpy.int64), for example), it can be pre-computed into an integer table, provided the integer values fit in the 7-bit integers we can use today. See the sketch after this list.

  3. Yes, we currently support converting functions that take integer inputs and produce integer outputs (no floating point inputs or outputs). There are some features to help you quantize simple NNs in the currently released Concrete Numpy, but we have some things brewing to make that easier for a wider range of NNs :slight_smile:
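To illustrate point 2, here is a minimal sketch of the kind of function that can be handled: floating point operations sit between a single integer input and an integer output (via `.astype(numpy.int64)`), so that whole subgraph can be pre-computed as an integer table. The function name, the scaling factors, and the input range are made up for the example; the compilation call itself is not shown here.

```python
import numpy as np

def scaled_sigmoid(x):
    # x is expected to be a small integer input (e.g. 0..15, so it fits in 7 bits)
    y = 1.0 / (1.0 + np.exp(-x / 4.0))   # floating point intermediate values
    # single integer output: the float subgraph between the integer input and this
    # cast can be replaced by a table lookup over the integer input values
    return (100 * y).astype(np.int64)

# the equivalent integer table over the input range, which is what an exact
# integer paradigm can work with (all outputs here stay below 2**7)
table = [int(scaled_sigmoid(np.int64(v))) for v in range(16)]
```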

I mention quantization because it allows computations to be performed over integers, with a dequantization step after the FHE computation to recover floating point numbers. Note that we have a small bit width for integers (7 bits), which means some networks will require extra work to fit quantization within that number of bits.
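As a rough illustration of that workflow, here is a generic affine quantization scheme written with plain NumPy; it is not necessarily the exact helper shipped with Concrete Numpy, just the idea of mapping floats to 7-bit integers and back.

```python
import numpy as np

def quantize(values, n_bits=7):
    # map floats onto unsigned integers of n_bits, remembering scale and offset
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / (2**n_bits - 1)
    q = np.round((values - vmin) / scale).astype(np.int64)
    return q, scale, vmin

def dequantize(q, scale, vmin):
    # recover approximate floating point values after the integer computation
    return q * scale + vmin

weights = np.array([-0.7, 0.1, 0.4, 1.2])
q, scale, vmin = quantize(weights)       # integers in [0, 127], usable over 7 bits
recovered = dequantize(q, scale, vmin)   # close to the original floats
```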

If you have further questions don’t hesitate to ask them here!

EDIT: for 1. we do support using tensors as inputs
