Array slicing for splitting and concatenating layers

I am working on a project that involves text-based models and was wondering how to use array slicing in Concrete ML (e.g. x[:, i, :]). The goal is to take each one-hot vector at index i along the second dimension, feed it through an embedding table, and concatenate the outputs.

The model trains with the slicing, but fails to compile with compile_brevitas_qat_model.

Please let me know if it is possible to split and concatenate layer inputs/outputs in Concrete ML.

import torch
import torch.nn as nn
import brevitas.nn as qnn


class TinyMLP(nn.Module):
    def __init__(self, n_neurons, n_bits) -> None:
        super().__init__()

        self.q1 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(27, 10, bias=False, weight_bit_width=n_bits)
        self.q2 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(30, n_neurons, bias=True, weight_bit_width=n_bits)
        self.q4 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc4 = qnn.QuantLinear(n_neurons, 27, bias=True, weight_bit_width=n_bits)

    def forward(self, x):
        """Slice the input along dim 1, embed each one-hot block with fc1, and concatenate."""
        x_pre = self.q1(x[:, 0, :])
        x_pre = self.fc1(x_pre)
        for i in range(1, self.n_blocks):
            x_pre = torch.cat((x_pre, self.fc1(self.q1(x[:, i, :]))), dim=1)
        x = self.q2(x_pre)
        x = self.fc2(x)
        x = torch.relu(x)
        x = self.q4(x)
        x = self.fc4(x)
        return x
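For reference, here is roughly how I call the compiler (a sketch: the input shape (batch, n_blocks, 27) of one-hot vectors, n_blocks = 3, n_neurons = 64, and n_bits = 3 are assumptions that match the layer shapes above, not the exact values from my full script):

import torch
from concrete.ml.torch.compile import compile_brevitas_qat_model

# Dummy one-hot training data of shape (batch, n_blocks, 27); values are placeholders.
x_train = torch.nn.functional.one_hot(
    torch.randint(0, 27, (100, 3)), num_classes=27
).float()

net = TinyMLP(n_neurons=64, n_bits=3)  # assumes the full model also sets self.n_blocks = 3
q_module = compile_brevitas_qat_model(net, x_train, n_bits=3)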

My versions

concrete-compiler               0.23.4
concrete-ml                     1.0.0
concrete-ml-extensions-brevitas 0.1.0
concrete-numpy                  0.9.0
concrete-python                 1.0.0

Error Message

Traceback (most recent call last):
  File "~Concrete/names/array_slicing.py", line 282, in <module>
    q_module = compile_brevitas_qat_model(nets[idx], x_train.float(), n_bits=idx+3)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/torch/compile.py", line 416, in compile_brevitas_qat_model
    q_module = compile_onnx_model(
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/torch/compile.py", line 263, in compile_onnx_model
    return _compile_torch_or_onnx_model(
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/torch/compile.py", line 125, in _compile_torch_or_onnx_model
    quantized_module.compile(
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/quantization/quantized_module.py", line 576, in compile
    self.fhe_circuit = compiler.compile(
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/fhe/compilation/compiler.py", line 434, in compile
    self._evaluate("Compiling", inputset)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/fhe/compilation/compiler.py", line 279, in _evaluate
    self._trace(first_sample)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/fhe/compilation/compiler.py", line 207, in _trace
    self.graph = Tracer.trace(self.function, parameters)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/fhe/tracing/tracer.py", line 77, in trace
    output_tracers: Any = function(**arguments)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/common/utils.py", line 1, in _clear_forward_proxy
    """Utils that can be re-used by other pieces of code in the module."""
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/quantization/quantized_module.py", line 367, in _clear_forward
    output = layer(*inputs)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/quantization/base_quantized_op.py", line 236, in __call__
    return self.q_impl(*q_inputs, **self.attrs)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/quantization/quantized_ops.py", line 2155, in q_impl
    self.call_impl(*inputs, **attrs),
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/quantization/base_quantized_op.py", line 623, in call_impl
    outputs = impl_func(*inputs) if not self._has_attr else impl_func(*inputs, **attrs)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/ml/onnx/ops_impl.py", line 1942, in numpy_gather
    return (x[tuple(slices)],)
  File "~.conda/envs/concrete/lib/python3.9/site-packages/concrete/fhe/tracing/tracer.py", line 750, in __getitem__
    raise ValueError(message)
ValueError: Indexing with '0' is not supported

Hi @lfolkerts,

Could you please share the dummy input you provide to the compile_brevitas_qat_model method, as well as the self.n_blocks value, which is missing here?

Here is a GitHub link with the full code, based on your CNN example.

I tried some debugging on my end: running with an older version of concrete-ml, I got an error message that may help pinpoint the issue.

concrete-compiler               0.23.4
concrete-ml                     0.6.1
concrete-ml-extensions-brevitas 0.1.0
concrete-numpy                  0.9.0
concrete-python                 1.0.0
Traceback (most recent call last):
  File "/home/lfolkerts/Projects/Masha/concrete-ml/baby_names/array_slicing.py", line 278, in <module>
    q_module = compile_brevitas_qat_model(nets[idx], x_train.float(), n_bits=idx+3)
  File "/home/lfolkerts/.local/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 349, in compile_brevitas_qat_model
    q_module_vl = compile_onnx_model(
  File "/home/lfolkerts/.local/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 253, in compile_onnx_model
    return _compile_torch_or_onnx_model(
  File "/home/lfolkerts/.local/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 110, in _compile_torch_or_onnx_model
    numpy_model = NumpyModule(model, dummy_input_for_tracing)
  File "/home/lfolkerts/.local/lib/python3.10/site-packages/concrete/ml/torch/numpy_module.py", line 59, in __init__
    self.numpy_forward = get_equivalent_numpy_forward(model)
  File "/home/lfolkerts/.local/lib/python3.10/site-packages/concrete/ml/onnx/convert.py", line 93, in get_equivalent_numpy_forward
    raise ValueError(
ValueError: The following ONNX operators are required to convert the torch model to numpy but are not currently implemented: Gather.
Available ONNX operators: Abs, Acos, Acosh, Add, Asin, Asinh, Atan, Atanh, AveragePool, BatchNormalization, Cast, Celu, Clip, Concat, Constant, Conv, Cos, Cosh, Div, Elu, Equal, Erf, Exp, Flatten, Floor, Gemm, Greater, GreaterOrEqual, HardSigmoid, HardSwish, Identity, LeakyRelu, Less, LessOrEqual, Log, MatMul, Max, MaxPool, Min, Mul, Neg, Not, Or, PRelu, Pad, Pow, ReduceSum, Relu, Reshape, Round, Selu, Sigmoid, Sign, Sin, Sinh, Softplus, Sub, Tan, Tanh, ThresholdedRelu, Transpose, Unsqueeze, Where, onnx.brevitas.Quant

Thanks for the details! First, you can update to concrete-ml 1.0.0. This should fix the problem you are seeing.

EDIT: I read too fast. The second try with the older version was your attempt to debug, got it. The problem is that Gather was not implemented in the 0.6 version, hence the error.

Let me have a deeper look into your problem.

Hello again @lfolkerts,

Looks like you ran into a bug in the numpy_gather method. We are going to patch this ASAP.
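For context, the slice x[:, i, :] is exported to ONNX as a Gather node with a scalar index along axis 1; in plain numpy terms it computes something like the following (a sketch with assumed shapes, not the Concrete ML implementation):

import numpy as np

# Gather with a scalar index i along axis 1: take the i-th block of each sample.
x = np.random.rand(4, 3, 27)       # (batch, n_blocks, one_hot_size), shapes assumed
i = 0
gathered = np.take(x, i, axis=1)   # shape (4, 27), identical to x[:, i, :]
assert np.array_equal(gathered, x[:, i, :])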


Thanks @jfrery !! Please let me know when the patch exists.

Hi @lfolkerts,

concrete-ml 1.0.2 is out with the fix for your bug. Let us know how it goes!
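If it helps, a simple pip upgrade should pick it up (adjust for your environment):

pip install --upgrade concrete-ml==1.0.2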


It works great now, thanks for the patch!!