Hello,

I am trying to compile a function that applies a MultiLookupTable of 10816 LookupTables of size 512 to my input, reorganizes the resulting binary values along a convolution, converts these binary values to the corresponding integers, and then applies 2304 LookupTables to that data. My input is an array of shape (10816,) with values in [0, 511].
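For reference, here is what the intended computation looks like outside the compiler, in plain NumPy with small made-up sizes (my real setup uses 10816 tables of size 512, 9 bits and 2304 output tables; the names below are illustrative, not my actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative sizes; the real setup is n_inputs = 10816,
# table_size = 512, num_bits = 9 and 2304 output tables.
n_inputs, table_size, num_bits, n_patches = 12, 8, 3, 4

table0 = rng.integers(0, 2, size=(n_inputs, table_size))      # one binary table per input
table1 = rng.integers(0, 100, size=2 ** num_bits)             # integer output table
patches_idx = rng.integers(0, n_inputs, size=(n_patches, num_bits))

x = rng.integers(0, table_size, size=n_inputs)                # the input array

y = table0[np.arange(n_inputs), x]        # per-input table lookup -> binary values
bits = y[patches_idx]                     # reorganize along the convolution patches
z = bits @ (2 ** np.arange(num_bits))     # binary columns -> one integer per patch
out = table1[z]                           # second layer of lookups
print(out.shape)                          # (4,)
```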

I get the following error:

```
/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/pygraphviz/agraph.py:1402: RuntimeWarning: neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.00636062 to fit
  warnings.warn(b"".join(errors).decode(self.encoding), RuntimeWarning)
Traceback (most recent call last):
  File "/home/usr/Documents/research/fhe/fhe_tt/hnp_main.py", line 208, in <module>
    tprepross, tcompil, tencrypt, tcomput = mnist_pipeline(p, lr_path, lit_max,
  File "/home/ntu/Documents/research/fhe/fhe_tt/hnp_main.py", line 72, in mnist_pipeline
    circuit, inputset = hnp_concrete_multitable.compile_f(f, nexpressions[0], nlit[0], patches[0], True)
  File "/home/ntu/Documents/research/fhe/fhe_tt/hnp_concrete_multitable.py", line 307, in compile_f
    circuit = compiler.compile_on_inputset(np.array(inputset))
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/np_fhe_compiler.py", line 279, in compile_on_inputset
    self.eval_on_inputset(inputset)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/np_fhe_compiler.py", line 213, in eval_on_inputset
    self._nodes_and_bounds = measure_op_graph_bounds_and_update(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/compile.py", line 415, in measure_op_graph_bounds_and_update
    result = run_compilation_function_with_error_management(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/compile.py", line 142, in run_compilation_function_with_error_management
    return compilation_function()
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/compile.py", line 405, in compilation_function
    return _measure_op_graph_bounds_and_update_internal(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/compile.py", line 305, in _measure_op_graph_bounds_and_update_internal
    inputset_size, node_bounds_and_samples = eval_op_graph_bounds_on_inputset(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/bounds_measurement/inputset_eval.py", line 198, in eval_op_graph_bounds_on_inputset
    first_output = op_graph.evaluate(current_input_data)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/operator_graph.py", line 204, in evaluate
    node_results[node] = node.evaluate(curr_inputs)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/representation/intermediate.py", line 485, in evaluate
    return self.arbitrary_func(*ordered_inputs, *self.op_args, **self.op_kwargs)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/tracing.py", line 453, in <lambda>
    arbitrary_func=lambda x: x.flatten(),
AttributeError: 'list' object has no attribute 'flatten'

Process finished with exit code 1
```

My function to compile:

```
# patches_idx_block1 is the array of the positions of the inputs needed for the convolution.
# Its shape is (nrows, ncols, kernel_size ** 2); here nrows = ncols = 12 and kernel_size ** 2 = 9.
# The input for this layer is 26x26, with kernel_size = 3, no padding (pad = 0) and a stride of 2.
# Each position stores the flat indexes of the convolution inputs it uses, e.g. the first one is
# [0, 1, 2, n, n + 1, n + 2, 2n, 2n + 1, 2n + 2], with n = 26 here.
def f(x):
    y = table0[x].flatten()
    num_filters = 16  # number of filters of my convolution
    num_bits = 9  # args.kernel_size_per_block[1] ** 2
    in_block1 = [[] for _ in range(num_filters)]
    for i in range(num_filters):
        for j in range(num_bits):
            in_block1[i].append([y[patches_idx_block1.flatten()[k + j]] for k in
                                 range(0, patches_idx_block1.flatten().shape[0], num_bits)])
    in_block1 = np.array(in_block1)  # (nfilters, nbits, npatches_per_filter)
    y_sparse = in_block1.transpose(1, 0, 2)  # (nbits, nfilters, npatches_per_filter)
    y_sparse = y_sparse.reshape(y_sparse.shape[0], -1).transpose()  # (nfilters * npatches_per_filter, nbits)
    x_bin = np.array([2 ** i * col for i, col in enumerate(y_sparse.transpose())]).transpose()
    z = np.sum(x_bin, axis=1)
    return table1[z]
```
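For completeness, patches_idx_block1 is generated along these lines (a simplified reconstruction, not my exact generation script): for a 26x26 input, kernel 3, stride 2 and no padding, each of the 12x12 positions stores the 9 flat input indexes of its patch.

```python
import numpy as np

n, ksize, stride = 26, 3, 2                 # input side, kernel size, stride (no padding)
nrows = ncols = (n - ksize) // stride + 1   # 12

# Flat index of every input used by the patch at position (r, c)
patches_idx_block1 = np.array([
    [(r * stride + dr) * n + (c * stride + dc)
     for dr in range(ksize) for dc in range(ksize)]
    for r in range(nrows) for c in range(ncols)
]).reshape(nrows, ncols, ksize ** 2)

print(patches_idx_block1[0, 0])  # [ 0  1  2 26 27 28 52 53 54]
```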

I generate my inputset with the following code:

```
# the values I use: npatches, nmax, nexpressions = 676, 9, 10816
inputset = []
for i in range(npatches):
    # uint16 because values up to 2 ** nmax - 1 = 511 do not fit in uint8
    input = np.array([np.random.randint(2 ** nmax, size=1) for _ in range(nexpressions)],
                     dtype=np.uint16)
    input = input.squeeze()
    inputset.append(input)
compiler = hnp.NPFHECompiler(f,
                             {"x": x},  # EncryptedTensor(Integer(64, False), input_shape)
                             )
print("Inputset:", np.array(inputset).shape)
circuit = compiler.compile_on_inputset(np.array(inputset))
```
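As a side note, the same inputset can be drawn in a single call, which makes the shape and dtype easier to check (uint16 here because values up to 2 ** 9 - 1 = 511 do not fit in uint8):

```python
import numpy as np

rng = np.random.default_rng(0)
npatches, nmax, nexpressions = 676, 9, 10816

# Each sample is one array of nexpressions values in [0, 2 ** nmax)
inputset = rng.integers(0, 2 ** nmax, size=(npatches, nexpressions), dtype=np.uint16)
print(inputset.shape, inputset.dtype)   # (676, 10816) uint16
```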

Therefore, is the problem caused by my inputset, or should I change my function to simplify it? Can I compile multiple functions with the compiler and run only one circuit?
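In case it helps, I believe the nested list comprehensions in f reduce to a single fancy-indexing gather in plain NumPy, which would keep everything as arrays; I have checked the shapes outside the compiler but not that this version traces in concrete-numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
num_filters, num_bits = 16, 9
y = rng.integers(0, 2, size=676)                             # binary values after table0
patches_idx_block1 = rng.integers(0, 676, size=(12, 12, num_bits))

# Gather the num_bits inputs of every patch in one indexing operation
bits = y[patches_idx_block1.reshape(-1, num_bits)]           # (npatches, num_bits)
# Repeat per filter, filter-major, as in the loop version
bits_all = np.tile(bits, (num_filters, 1))                   # (num_filters * npatches, num_bits)
# Convert the binary columns of each row to one integer
z = bits_all @ (2 ** np.arange(num_bits))                    # values in [0, 2 ** num_bits)
print(z.shape)                                               # (2304,)
```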

Thanks for any information about this error,

Tristan