ValueError when compiling

Hey! I am new to FHE, so I want to apologize in advance in case I did something really stupid here. I want to implement kernel ridge regression prediction using Concrete Numpy. The Ridge class in Concrete ML is unfortunately not sufficient for my case.

After implementing a somewhat dummy example, I get an error when compiling the np.dot part:
Constant array([[<concrete.numpy.tracing.tracer.Tracer object at 0x7f10f6600dc0>,
<concrete.numpy.tracing.tracer.Tracer object at 0x7f10f6605df0>,
<concrete.numpy.tracing.tracer.Tracer object at 0x7f10f658dd60>,
<concrete.numpy.tracing.tracer.Tracer object at 0x7f10f6594760>,
…
<concrete.numpy.tracing.tracer.Tracer object at 0x7f10f63ecbe0>]],
dtype=object) is not supported

The full error message is visible as a comment in the code.

I suppose the problem is in how the matrix K is constructed, but I am not sure what exactly the problem is, or whether there is a simple way to deal with this issue in Concrete Numpy. Many thanks in advance!

Hi @crypto_newbie,

The issue originates from the fact that you use np.asarray, and it's not trivial to support np.asarray: when you call it on a list of encrypted scalars, NumPy creates an array with dtype=object, whereas tensors in Concrete Numpy are represented with a single tracer. We're working on improving operation support, and it might be possible in the future with Python tricks, but for now, you can try to use cnp.array([some_encrypted_scalar, another_encrypted_scalar]) instead. It only accepts scalars and it can only create vectors, but you should be able to flatten your input, use cnp.array, and reshape the result back to the original shape.
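A minimal sketch of that pattern (assuming concrete-numpy's decorator API; the function body and shapes are just for illustration):

import numpy as np
import concrete.numpy as cnp

# Build a 2x2 tensor of encrypted values with cnp.array (which yields a
# vector of scalars) followed by a reshape, instead of np.asarray.
@cnp.compiler({"x": "encrypted"})
def build_matrix(x):
    cells = [x[0] + 1, x[1] + 2, x[2] + 3, x[3] + 4]  # encrypted scalars
    return cnp.array(cells).reshape((2, 2))

inputset = [np.random.randint(0, 8, size=(4,)) for _ in range(10)]
circuit = build_matrix.compile(inputset)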

Another note: np.dot is supported, but only between an encrypted and a clear value, not between two encrypted values yet. Again, this will change in the future. Just wanted to give a heads up :slight_smile:

Let us know if you have any more questions!

Thank you! I was able to fix the first issue exactly as you described. However, as suspected, I'm having trouble with the np.dot or K @ alphas multiplication between encrypted values. Therefore, I implemented a simple double loop for matrix multiplication (same GitHub link as above).

Unfortunately, I'm running into issues again: I'm getting an error message that says "only integer operations are supported fhe_krr.py:53". Also, I'm not sure if I'm compiling correctly. I need to tell the compiler which input is encrypted, but if I understand correctly, I have to quantize my input first, then compile, and finally encrypt the input to execute with FHE.

At the moment, I'm passing the unencrypted X_test values when compiling the function, which is not what I want. I hope I'm not causing too much trouble, but if this kernel ridge regression example works out, I'm willing to contribute a nicely documented tutorial for anyone to try!
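Roughly, my current compile step looks like this (an illustrative sketch; the body stands in for the real prediction function):

import numpy as np
import concrete.numpy as cnp

@cnp.compiler({"x_test": "encrypted"})
def predict(x_test):
    return np.sum(x_test)  # placeholder for the actual KRR prediction

# compilation only sees clear representative samples from the inputset;
# encryption happens when the compiled circuit is executed
inputset = [np.random.randint(0, 8, size=(30,)) for _ in range(10)]
circuit = predict.compile(inputset)
result = circuit.encrypt_run_decrypt(inputset[0])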


For the ValueError I think the solution is in two steps:

  • Cast alphas to int: you can just add dtype=np.int64 to the ones_like initialization call.
  • Set alphas as "clear": you will then be able to compute the dot product between your encrypted X_train and the clear alphas.

Then you can use np.dot(X_train, alphas) or X_train @ alphas without any issue: you will be multiplying an encrypted vector with a clear vector.
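For instance, something along these lines should compile (a sketch assuming concrete-numpy's decorator API, with illustrative sizes):

import numpy as np
import concrete.numpy as cnp

alphas = np.ones(5, dtype=np.int64)  # clear integer weights

@cnp.compiler({"x_train": "encrypted"})
def dot_with_clear(x_train):
    return np.dot(x_train, alphas)  # encrypted x clear is supported

inputset = [np.random.randint(0, 8, size=(5,)) for _ in range(10)]
circuit = dot_with_clear.compile(inputset)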

Next, as @umutsahin warned, you will run into issues with multiplication between two encrypted values, because you compute np.dot(X_train[i] - X_test[j], X_train[i] - X_test[j]), a dot product between two encrypted operands. There is a way around this: a trick that expresses multiplication with squares:

prod_a_b = ((a + b) ** 2 // 4).astype(np.int64) - ((a - b) ** 2 // 4).astype(np.int64)
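This is exact for integers, since (a + b)**2 - (a - b)**2 == 4 * a * b and both squares leave the same remainder modulo 4. A quick plain-Python sanity check:

# both floor divisions lose the same amount, so the difference is exactly a * b
for a in range(-5, 6):
    for b in range(-5, 6):
        assert (a + b) ** 2 // 4 - (a - b) ** 2 // 4 == a * b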

You will need to apply this trick to each cell of the Gaussian matrix, and in that case you will need to use two for loops, as you did for alphas.

Let us know if you need more help! And thanks for your offer to write a tutorial, that would be awesome: it would be the first kernel method implemented with Concrete!

Thank you, I will try to test this ASAP! Just to make things clear: eventually we want the alphas to be encrypted, just like you would also encrypt the weights in a neural network. So, just as for the kernel computation, we cannot use np.dot, right?

I wonder why we cannot use K @ alphas though, because I see that this is done in

OK, I understand the idea of the model being encrypted as well; I had presumed it was not. The example from Concrete-ML that you show assumes the weights are in the clear.

So, for now, you need to use the trick for the multiplication and express the matrix multiplication with loops. It will work if the matrix sizes are small.
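For instance, the matrix-vector product could be written along these lines inside the function being compiled (a rough sketch using the squaring trick from above; mul_enc and the shapes are illustrative):

import numpy as np

def mul_enc(a, b):
    # encrypted x encrypted product via the squaring trick
    return ((a + b) ** 2 // 4).astype(np.int64) - ((a - b) ** 2 // 4).astype(np.int64)

def matvec(K, alphas):
    # naive double loop; feasible while the matrices stay small
    n_rows, n_cols = K.shape
    result = []
    for i in range(n_rows):
        acc = 0
        for j in range(n_cols):
            acc += mul_enc(K[i, j], alphas[j])
        result.append(acc)
    return result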

Though I can’t give an exact date, we will add encrypted x encrypted matrix multiplication quite soon.

Thank you! Of course, everything will be much easier when the new features are out. Looking forward to that!
Currently, the goal of the kernel ridge implementation is more proof of concept and less efficiency. I have implemented matrix multiplication through a loop. However, at some point during the compilation it crashes:
(%2542 = multiply(%2278, %2541) # EncryptedScalar
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
fhe_krr.py:73)
I suppose it crashes at some point when multiplying A and B in line 73 (see the original repo link)?

I looked a bit deeper into this, let’s go over the issues.

First, let's refactor the multiplication function, since we'll use it several times:


def mul_tlu(A, B):
    # (A + B)**2 / 4 - (A - B)**2 / 4 == A * B
    apbsq = ((A + B).astype(np.float64) ** 2 // 4).astype(np.int64)
    ambsq = ((A - B).astype(np.float64) ** 2 // 4).astype(np.int64)
    return apbsq - ambsq

Note I added an astype(np.float64): this forces all the code between this cast and the astype(np.int64) cast to be fused into a single table lookup.

You are trying to use an encrypted model, meaning the alphas and sigma are encrypted integers. However, the table lookup mechanism does not support multiple inputs, so you cannot compute prod_A_B / sigma**2: both operands are encrypted integers, and division can only be performed with a table lookup.

The solution to this is to re-write the operation as a multiplication and use the multiplication function above:

inv_sigma_sq = (1 / sigma) ** 2
div_sigma = mul_tlu(prod_A_B, inv_sigma_sq)  # replaces prod_A_B / sigma**2
c = np.exp(-div_sigma).astype(np.int64)

Note I added astype(np.int64) to get back to the integer domain after the exp.

You can replace the np.array(K) part with a simple initialization of K as zeros and indexing into it. In principle, you need to use the cnp.array function to build an array of encrypted values instead of np.asarray.

    K = cnp.zeros((X_train.shape[0], X_test.shape[0]))
....
            prod_A_B = np.sum(mul_tlu(A, B))
            div_sigma = mul_tlu(prod_A_B, inv_sigma_sq)  # replaces prod_A_B / sigma**2
            c = np.exp(-div_sigma).astype(np.int64)
            K[i, j] = c

Also, your A * B multiplication was unsupported, as both A and B are encrypted. You need mul_tlu there again.
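To illustrate, a minimal end-to-end sketch of an encrypted x encrypted product through mul_tlu (assuming concrete-numpy's decorator API):

import numpy as np
import concrete.numpy as cnp

def mul_tlu(A, B):
    apbsq = ((A + B).astype(np.float64) ** 2 // 4).astype(np.int64)
    ambsq = ((A - B).astype(np.float64) ** 2 // 4).astype(np.int64)
    return apbsq - ambsq

@cnp.compiler({"a": "encrypted", "b": "encrypted"})
def mul(a, b):
    return mul_tlu(a, b)

inputset = [(np.random.randint(0, 8), np.random.randint(0, 8)) for _ in range(20)]
circuit = mul.compile(inputset)
print(circuit.encrypt_run_decrypt(3, 5))  # expected: 15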

Sorry for the late reply!
Thank you, this helped a lot. I am able to compile the computation of the Gaussian matrix now, written as

def mul_tlu(A, B):
    apbsq = ((A + B).astype(np.float64) ** 2 // 4).astype(np.int64)
    ambsq = ((A - B).astype(np.float64) ** 2 // 4).astype(np.int64)
    return apbsq - ambsq

def dot_product(A, B):
    len_A = A.shape[0] if hasattr(A, 'shape') else len(A)
    len_B = B.shape[0] if hasattr(B, 'shape') else len(B)

    if len_A != len_B:
        raise ValueError("The input vectors must have the same length.")

    result = 0
    for i in range(len_A):
        a = A[i]
        b = B[i]
        result += mul_tlu(a, b)

    return result


def gaussian_matrix(X_train, X_test, inv_sigma_sq):
    n_rows = X_train.shape[0]
    n_cols = X_test.shape[0]
    K = []

    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            diff = X_train[i] - X_test[j]
            diff_sq = dot_product(diff, diff)
            div_sigma = mul_tlu(diff_sq, inv_sigma_sq)
            c = np.exp(-div_sigma).astype(np.int64)
            row.append(c)
        K.append(row)

    K = cnp.array(K)
    return K

with all inputs encrypted. However, the prediction function

def gaussian_predict(X_train,X_test,sigma, alphas):
    sigma = sigma.astype(np.int64)
    inv_sigma_sq = (1 / sigma) ** 2
    K = gaussian_matrix(X_train, X_test, inv_sigma_sq)
    
    alphas = alphas.astype(np.int64)
    
    y_predicted = dot_product(K, alphas)
    return y_predicted.astype(np.int64)

still causes problems (not pasting the full output here):

Traceback (most recent call last):
  File "fhe_krr.py", line 77, in <module>
    circuit = compiler.compile(inputset )
  File "/home/jan/miniconda3/envs/fhe/lib/python3.8/site-packages/concrete/numpy/compilation/compiler.py", line 439, in compile
    self._evaluate("Compiling", inputset)
  File "/home/jan/miniconda3/envs/fhe/lib/python3.8/site-packages/concrete/numpy/compilation/compiler.py", line 280, in _evaluate
    self._trace(first_sample)
  File "/home/jan/miniconda3/envs/fhe/lib/python3.8/site-packages/concrete/numpy/compilation/compiler.py", line 211, in _trace
    fuse(self.graph, self.artifacts)
  File "/home/jan/miniconda3/envs/fhe/lib/python3.8/site-packages/concrete/numpy/compilation/utils.py", line 60, in fuse
    fused_node, node_before_subgraph = convert_subgraph_to_subgraph_node(
  File "/home/jan/miniconda3/envs/fhe/lib/python3.8/site-packages/concrete/numpy/compilation/utils.py", line 569, in convert_subgraph_to_subgraph_node
    raise RuntimeError(
RuntimeError: A subgraph within the function you are trying to compile cannot be fused because it has multiple input nodes

   %0 = X_train                                 # EncryptedTensor<float64, shape=(5, 30)>
   %1 = X_test                                  # EncryptedTensor<float64, shape=(2, 30)>
   %2 = sigma                                   # EncryptedScalar<uint1>
   %3 = alphas                                  # EncryptedTensor<uint1, shape=(5,)>

So Concrete is having trouble fusing a subgraph with multiple input nodes. I tried many different variants but could not find a solution yet.
Any suggestions are welcome!