Can I compile and execute a torch model with several outputs in FHE?

Currently, directly compiling a torch model whose inference returns several outputs with one of the compile_torch_model, compile_onnx_model or compile_brevitas_qat_model functions will fail.

However, there are several workarounds that can make this work. Assuming, for example, a model with three outputs, at the end of the forward pass one can:

  • concatenate the three results into a single array, provided they share compatible shapes
  • flatten the three results first and then use the same concatenation trick
  • manually create a large enough tensor of zeros and write the values at the desired indexes (t[0, i] = result_0[i])
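The first workaround can be sketched with a small wrapper module. This is a minimal illustration with a hypothetical model (the model, its layer sizes and the wrapper name are assumptions, not part of any library API): the wrapper stacks the three same-shape outputs into one tensor, which is the single output the compilation functions expect.

```python
import torch

class MultiOutputModel(torch.nn.Module):
    """Hypothetical model whose forward pass returns three same-shape outputs."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 3)

    def forward(self, x):
        y = self.linear(x)
        # Three outputs sharing the same shape (batch, 3)
        return y, y * 2, y + 1

class SingleOutputWrapper(torch.nn.Module):
    """Wraps a multi-output model so that it returns a single stacked tensor."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out_0, out_1, out_2 = self.model(x)
        # Stack along a new leading axis: each original output stays
        # recoverable afterwards as stacked[0], stacked[1], stacked[2]
        return torch.stack([out_0, out_1, out_2], dim=0)
```

The wrapped model (not the original one) would then be passed to the compilation function.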

In all cases, once the inference has been executed in FHE, the three results have to be recovered manually from the single clear output. With the first solution, this simply means indexing it with [0], [1] and [2]. With the other two, the results need to be reshaped back to their original dimensions.
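For the flatten-and-concatenate approach, the recovery step can be sketched as follows. The output shapes used here are an assumption for illustration; in practice they are the known shapes of the model's original outputs.

```python
import numpy as np

# Hypothetical shapes of the model's three original outputs
shapes = [(2, 3), (4,), (2, 2)]
sizes = [int(np.prod(s)) for s in shapes]

# Stand-in for the single flat clear result returned by the FHE execution
flat_output = np.arange(sum(sizes), dtype=np.float64)

# Split the flat array at the cumulative size boundaries,
# then restore each chunk to its original shape
splits = np.split(flat_output, np.cumsum(sizes)[:-1])
results = [chunk.reshape(shape) for chunk, shape in zip(splits, shapes)]
```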

Ultimately, multi-output support is a feature planned for a future release!