Fully connected layers connect all the outputs of the previous layer to each neuron, which typically requires an accumulator of more than 7 bits of precision. Remember that, if you use 2 bits of precision in *compile_torch_model*, each neuron can only have 14 connections before its accumulator starts to overflow.

The maximum number of connections (in theory) allowed per neuron is:

$$\Omega = \mathsf{floor} \left( \frac{2^{n_{\mathsf{max}}} - 1}{(2^{n_{\mathsf{weights}}} - 1)(2^{n_{\mathsf{inputs}}} - 1)} \right)$$

where $n_{\mathsf{max}} = 7$ is our current precision limit in FHE, and $n_{\mathsf{weights}}$ and $n_{\mathsf{inputs}}$ are the number of bits used for the weights and inputs, respectively.
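As a sanity check, the formula can be evaluated directly. The helper name below is illustrative, not part of the library API:

```python
def max_connections(n_max: int = 7, n_weights: int = 2, n_inputs: int = 2) -> int:
    """Theoretical maximum number of connections per neuron (Omega)
    before the accumulator overflows n_max bits."""
    return (2**n_max - 1) // ((2**n_weights - 1) * (2**n_inputs - 1))

print(max_connections())        # 2-bit weights and inputs -> 14
print(max_connections(7, 3, 3)) # 3-bit weights and inputs -> 2
```

With 2 bits for both weights and inputs, this recovers the 14-connection limit mentioned above; at 3 bits, only 2 connections fit within the 7-bit accumulator.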

A possible solution is to apply a dimensionality reduction method (e.g. PCA, an autoencoder) to the input before compiling, then feed the transformed input to the network, making sure the width of your fully connected layers does not exceed $\Omega$.
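A minimal NumPy sketch of this idea, using PCA via SVD to shrink a hypothetical 64-feature input down to $\Omega = 14$ components (the data here is random and only for illustration):

```python
import numpy as np

# Omega for 2-bit weights and 2-bit inputs: floor(127 / (3 * 3)) = 14
omega = (2**7 - 1) // ((2**2 - 1) * (2**2 - 1))

# Hypothetical high-dimensional input: 100 samples, 64 features
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 64))

# PCA via SVD: center the data, decompose, keep the top omega components
x_centered = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x_centered, full_matrices=False)
x_reduced = x_centered @ vt[:omega].T

print(x_reduced.shape)  # (100, 14)
```

A fully connected layer taking `x_reduced` as input then has at most 14 connections per neuron, staying within the overflow limit for 2-bit weights and inputs.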

More information about this is available in Computation With Quantized Functions — Concrete Numpy Manual.