Many neural network (NN) architectures can be compiled to FHE, but the amount of work required from the developer to achieve this compilation varies with the network topology:
- Concrete Numpy provides tools to convert neural networks implemented in Torch to FHE, but operator support is more limited; see here for more details.
- More generally, you can do the conversion manually and implement an inference function with numpy and our FHE numpy primitives.
The second case allows much more freedom, but you might need to do additional work (for example, to implement convolution, as shown here). With this approach you can, for example, implement any activation function by building a lookup table.
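To make the lookup-table idea concrete, here is a minimal plain-numpy sketch of an inference function with an activation evaluated through a precomputed table. This is not the library's API: the function names, the int8 input range of the table, and the clipping are all illustrative assumptions.

```python
import numpy as np

# Precompute ReLU over every possible int8 value. In FHE, an arbitrary
# activation can be realized the same way: as a table indexed by the
# (quantized) integer input.
TABLE = np.maximum(np.arange(-128, 128), 0)

def relu_lut(x):
    # Shift int8 values [-128, 127] to table indices [0, 255], then look up.
    return TABLE[x + 128]

def inference(x, w1, w2):
    # Dense layer -> lookup-table activation -> dense layer.
    # Clipping keeps intermediate values inside the table's input range
    # (an assumption standing in for proper quantization).
    h = relu_lut(np.clip(x @ w1, -128, 127))
    return h @ w2

rng = np.random.default_rng(0)
x = rng.integers(-4, 4, size=(1, 8))
w1 = rng.integers(-2, 2, size=(8, 16))
w2 = rng.integers(-2, 2, size=(16, 4))
y = inference(x, w1, w2)  # shape (1, 4)
```

Any elementwise nonlinearity (sigmoid, GELU, ...) can be substituted by changing how `TABLE` is filled, since only the table contents depend on the activation.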
The only hard limitation in this more general case is MaxPool: the max operation over a tensor, though it can be done in FHE, is not yet supported.
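To illustrate what is and is not available, here is a plain-numpy sketch contrasting the unsupported max reduction with average pooling, which needs only additions and a constant multiplication. Treating average pooling as a drop-in substitute for MaxPool is an assumption on our part, not guidance from the library, and the 2x2 non-overlapping windows are illustrative.

```python
import numpy as np

def avg_pool_2x2(x):
    # Sum the four elements of each 2x2 window, then divide by a constant.
    # Additions and constant multiplications are FHE-friendly operations.
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]  # drop odd trailing row/column
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def max_pool_2x2(x):
    # The MaxPool equivalent: the elementwise max across window positions
    # is exactly the tensor max operation that is not yet supported.
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]
    return np.maximum.reduce(
        [x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2]]
    )
```

The average-pooling version stays within supported primitives, which is why networks targeting FHE are often designed (or retrained) with it in place of MaxPool.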