Compilation of a lot of lookup tables


I am trying to compile a function that applies a MultiLookupTable of 10816 LookupTables of size 512, reorganizes the obtained binary data along a convolution, converts these binary values to the corresponding integers, and then applies 2304 LookupTables on this data. My input is an array of size (10816,) with values between 0 and 511.

I get the following error:

/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/pygraphviz/ RuntimeWarning: neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.00636062 to fit

  warnings.warn(b"".join(errors).decode(self.encoding), RuntimeWarning)
Traceback (most recent call last):
  File "/home/usr/Documents/research/fhe/fhe_tt/", line 208, in <module>
    tprepross, tcompil, tencrypt, tcomput = mnist_pipeline(p, lr_path, lit_max,
  File "/home/ntu/Documents/research/fhe/fhe_tt/", line 72, in mnist_pipeline
    circuit, inputset = hnp_concrete_multitable.compile_f(f, nexpressions[0], nlit[0], patches[0], True)
  File "/home/ntu/Documents/research/fhe/fhe_tt/", line 307, in compile_f
    circuit = compiler.compile_on_inputset(np.array(inputset))
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 279, in compile_on_inputset
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 213, in eval_on_inputset
    self._nodes_and_bounds = measure_op_graph_bounds_and_update(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 415, in measure_op_graph_bounds_and_update
    result = run_compilation_function_with_error_management(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 142, in run_compilation_function_with_error_management
    return compilation_function()
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 405, in compilation_function
    return _measure_op_graph_bounds_and_update_internal(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 305, in _measure_op_graph_bounds_and_update_internal
    inputset_size, node_bounds_and_samples = eval_op_graph_bounds_on_inputset(
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/bounds_measurement/", line 198, in eval_op_graph_bounds_on_inputset
    first_output = op_graph.evaluate(current_input_data)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/", line 204, in evaluate
    node_results[node] = node.evaluate(curr_inputs)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/representation/", line 485, in evaluate
    return self.arbitrary_func(*ordered_inputs, *self.op_args, **self.op_kwargs)
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/numpy/", line 453, in <lambda>
    arbitrary_func=lambda x: x.flatten(),
AttributeError: 'list' object has no attribute 'flatten'

Process finished with exit code 1

My function to compile:

# patches_idx_block1 is the array of the positions of the inputs needed for the convolution. It is of shape
# (nrows, ncols, kernel_size ** 2), here with nrows = ncols = 12 and kernel_size ** 2 = 9.
# The input shape for this one is 26x26, ksize 3, no padding (pad = 0) and a stride of 2.
# Each position retrieves the indexes of the convolution inputs used, i.e. the first one would be
# [0, 1, 2, n, n+1, n+2, 2n, 2n+1, 2n+2], with n = 26 here.
def f(x):
    y = table0[x].flatten()
    num_filters = 16 # Number of filters of my convolution
    num_bits = 9 # args.kernel_size_per_block[1]**2

    in_block1 = [[] for _ in range(num_filters)]
    for i in range(num_filters):
        for j in range(num_bits):
            in_block1[i].append([y[patches_idx_block1.flatten()[k + j]] for k in
                                  range(0, patches_idx_block1.flatten().shape[0], num_bits)])
    in_block1 = np.array(in_block1) # (nfilters, nbits, npatches_per_filter)
    y_sparse = in_block1.transpose(1, 0, 2)  # (nbits, nfilters, npatches_per_filter)
    y_sparse = y_sparse.reshape(y_sparse.shape[0], -1).transpose()  # (nfilters * npatches_per_filter, nbits)
    x_bin = np.array([2 ** i * col for i, col in enumerate(y_sparse.transpose())]).transpose()
    z = np.sum(x_bin, axis=1)
    return table1[z]
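For reference, here is a minimal clear-data sketch (plain NumPy, outside the compiler) of how such a patches_idx_block1 array could be built for a 26x26 input, 3x3 kernel, stride 2, no padding; the construction below is my assumption based on the comment above, not the author's actual code:

```python
import numpy as np

# Hypothetical reconstruction of patches_idx_block1:
# 26x26 input, 3x3 kernel, stride 2, no padding -> 12x12 output positions.
n, ksize, stride = 26, 3, 2
nrows = (n - ksize) // stride + 1  # 12
idx = np.array([
    [[(r * stride + dr) * n + (c * stride + dc)
      for dr in range(ksize) for dc in range(ksize)]
     for c in range(nrows)]
    for r in range(nrows)
])

print(idx.shape)   # (12, 12, 9)
print(idx[0, 0])   # [ 0  1  2 26 27 28 52 53 54]
```

The first patch indeed gathers indexes [0, 1, 2, n, n+1, n+2, 2n, 2n+1, 2n+2] with n = 26, as described in the comment.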

I have generated my inputset with the following code:

# the values I use: npatches, nmax, nexpressions = 676, 9, 10816
inputset = []
for i in range(npatches):
    input = np.array([np.random.randint(2 ** nmax, size=1) for _ in range(nexpressions)])
    input = input.squeeze()
    inputset.append(input)
compiler = hnp.NPFHECompiler(f,
                             {"x": x})  # EncryptedTensor(Integer(64, False), input_shape)
print(f"Inputset : ", np.array(inputset).shape)
circuit = compiler.compile_on_inputset(np.array(inputset))

Therefore, is the problem due to my inputset, or should I change my function to simplify it? Can I compile multiple functions into the compiler and run only one circuit?

Thanks for any information about this error,



I don’t see the table0 and patches_idx_block1 definitions, but it looks like one of them does not have the right type.
Are they defined with the LookupTable constructor?

table0 = hnp.LookupTable([
    [hnp.LookupTable(table_a), hnp.LookupTable(table_b)],
    [hnp.LookupTable(table_c), hnp.LookupTable(table_d)],
])
patches_idx_block1 = np.array([
    [hnp.LookupTable(table_a), hnp.LookupTable(table_b)],
    [hnp.LookupTable(table_c), hnp.LookupTable(table_d)],
])

(MultiLookupTable has been replaced by LookupTable in v0.6.0)
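On clear data, the semantics of such a per-position multi-lookup can be simulated in plain NumPy (the 2-bit table contents below are made up, purely to illustrate that each input position gets its own table):

```python
import numpy as np

# one 4-entry table per input position (hypothetical contents)
tables = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 1]])
x = np.array([2, 3])  # stands in for the encrypted input

# position i of x is looked up in table i:
result = tables[np.arange(len(x)), x]
print(result)  # [1 1]
```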

Can you provide the full code if this suggestion is not working ?

Also, assuming y is an encrypted vector of the same size as x and patches_idx_block1 a set of table lookups, it should be

(patches_idx_block1.flatten()[k + j])[y]

instead of

y[patches_idx_block1.flatten()[k + j]]
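On clear data, the two expressions are genuinely different operations; a sketch in plain NumPy (the table contents and values are made up):

```python
import numpy as np

table = np.array([1, 0, 0, 1])  # a 4-entry lookup table
y = np.array([3, 0, 2])         # stands in for the encrypted vector

# table lookup applied element-wise to the (encrypted) values:
print(table[y])  # [1 1 0]

# plain indexing: selects elements of y at the given (clear) positions:
print(y[np.array([0, 2])])  # [3 2]
```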

And lastly, if you want to use 9-bit values (in [0, 511]), you need concrete-numpy v0.9.0, which is not public yet (in release candidate stage).

Thanks for your answer, I have changed my MultiLookupTable to a LookupTable. I will come back when my script has finished compiling, as it is quite long (>5 hours). If it does not work, I’ll try to reduce the lookup table size to 8 bits.
patches_idx_block1 is not a set of table lookups; it is an array of indexes. But transforming it into a lookup table is a good idea, as it may indeed be faster. Do you have any benchmarks on the speed of a table lookup vs. accessing a value of an encrypted vector with an index?

Finally, I saw that concrete-numpy 0.5 has documentation of the integrated numpy functions. Is there the same thing for v0.8, and/or a list of the changes between each version?

Thanks for your answer

I think major compilation speedups were introduced with v0.6.0 (a x20 speedup in certain cases), but I am not sure what the API changes are. This code is short, so it’s worth trying the update.

Hello @tricycl3

For the list of numpy functions supported in 0.8, here is the link: Compatibility - Concrete Numpy

And if you want to see the changes between versions 0.6, 0.7 and 0.8, you can read this product announcement: Announcing Concrete Numpy v0.8

Hello again,

So I have tried to change the MultiLookupTable to a LookupTable, and I had the following error:

File "/home/usr/Documents/research/fhe/fhe_tt/", line 350, in lookuptable_for_multiple_blocks
    multi_lookup = hnp.LookupTable([lk for lk in lookups])
  File "/home/usr/Documents/research/fhe/fhe_tt/venv/lib/python3.8/site-packages/concrete/common/extensions/", line 26, in __init__
    raise ValueError(
ValueError: Desired lookup table has inappropriate number of entries (10816)
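A likely cause (my assumption, not confirmed by the traceback itself): a LookupTable indexed by an n-bit encrypted value needs exactly 2**n entries, and 10816 is not a power of two. A quick check:

```python
n = 10816
# a positive integer is a power of two iff it has a single bit set
print((n & (n - 1)) == 0)  # False: 10816 is not a power of two
print(2 ** 13, 2 ** 14)    # 8192 16384 -- the nearest powers of two around it
```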

I define my LookupTable with the following function:

def lookuptable_for_multiple_blocks(folder, patches, nlit, op, num_blocks=2, exceptions=[]):
    """Create the lookup tables for each block.

    A DNF is a boolean expression of ORs of ANDs (of the form c1 | c2 | ... where each c_i is a clause of ANDs).
    A CNF is a boolean expression of ANDs of ORs.

    folder: str, path of the folder with all the CNF/DNF files
    patches: list, number of patches created by the convolution per filter
    # here patches = [676, 144]
    nlit: list, maximum number of literals in each expression
    # here nlit = [8, 8]
    op: str, determines if DNF or CNF format
    num_blocks: int, number of convolutional blocks
    exceptions: list of integers, filters to not take into account
    """
    if op == '|':
        expr_type = "CNF"
    else:
        expr_type = "DNF"
    tables = []
    nexpressions = []
    for i in range(num_blocks):
        block_num = str(i)
        files = os.listdir(folder)
        block = f'block{block_num}'
        npatches_per_filter = patches[i]
        nmax = nlit[i]
        # getting the files with the right format
        expr_files = [f for f in files if expr_type in f and block in f]
        # here I order the files as I need
        all_expr = create_expr_for_folder(expr_files, npatches_per_filter, exception=exceptions)
        all_expr = all_expr.flatten().tolist()
        fpath = os.path.join(folder, all_expr[0].decode())
        # I load the expression of the first file only to debug, as loading everything is slow
        expression = load_expression(fpath, nmax)
        # table is a np.array of boolean values of size 2**8
        table = expression2table(expression, nmax, op)
        lookups = [hnp.LookupTable(table)]
        lookups = lookups * len(all_expr)
        # I commented out the following block as it takes a bit of time:
        # create the lookup tables for each expression
        # for file in tqdm(all_expr):
        #     fpath = os.path.join(folder, file.decode())
        #     expression = load_expression(fpath, nmax)
        #     table = expression2table(expression, nmax, op)
        multi_lookup = hnp.LookupTable([lk for lk in lookups])
        print(f"Table {i} size : {len(lookups)}")
        tables.append(multi_lookup)
        nexpressions.append(len(lookups))

    return tables, nexpressions

Therefore I stayed with hnp.MultiLookupTable, as it worked in my case.

The initial issue was the .flatten() in f, so I removed it, which solved the problem, with values of x between [0, 255]. I do not understand why, since it is supported according to @alex’s link (and I am on concrete-numpy 0.8). The compilation is now super fast (less than 100 s!), which is better for debugging.

I saw that np.logical_or and other boolean operations are now available in Concrete. Is it possible between two encrypted tensors, or still only between a clear tensor and an encrypted one?

Thank you for your awesome library

& and | operations are only between encrypted and clear for now, and they are converted to a LookupTable under the hood.
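On clear data, that conversion can be sketched by precomputing the operation into a table for every possible input value (a sketch of the idea in plain NumPy, not the library’s actual internals; the constant and bit width are made up):

```python
import numpy as np

clear_constant = 0b0101
bits = 4
# precompute "value & clear_constant" for every possible 4-bit input
table = np.array([v & clear_constant for v in range(2 ** bits)])

x = np.array([3, 12, 7])  # stands in for the encrypted operand
print(table[x])           # same result as x & clear_constant -> [1 4 5]
```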