MLIR for Concrete ML models

We get the following after compiling a function in Concrete (both the DAG and the MLIR), for example an addition circuit:

Computation Graph

%0 = x # EncryptedScalar ∈ [0, 7]
%1 = y # ClearScalar ∈ [0, 7]
%2 = add(%0, %1) # EncryptedScalar ∈ [0, 14]
return %2

MLIR

module {
  func.func @main(%arg0: !FHE.eint<4>, %arg1: i5) -> !FHE.eint<4> {
    %0 = "FHE.add_eint_int"(%arg0, %arg1) : (!FHE.eint<4>, i5) -> !FHE.eint<4>
    return %0 : !FHE.eint<4>
  }
}
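
For context, here is a minimal sketch of how such an addition circuit can be compiled with concrete-python; the function name, inputset and variable names are illustrative assumptions, not taken from the output above:

from concrete import fhe

# "x" is encrypted, "y" stays in the clear, matching the graph above
@fhe.compiler({"x": "encrypted", "y": "clear"})
def add(x, y):
    return x + y

# example inputset covering values in [0, 7], as in the graph above
inputset = [(i, j) for i in range(8) for j in range(8)]
circuit = add.compile(inputset)

print(circuit.mlir)  # textual MLIR, as shown above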

But for Concrete ML models we do not see the MLIR (below is the compilation output from a logistic regression (LR) model):

Computation Graph

%0 = q_X # EncryptedTensor<int8, shape=(1, 30)> ∈ [-128, 127]
%1 = [[ -19] [ … 8] [ -11]] # ClearTensor<int8, shape=(30, 1)> ∈ [-128, 127]
%2 = matmul(%0, %1) # EncryptedTensor<int17, shape=(1, 1)> ∈ [-33762, 31808]
%3 = sum(%0, axis=1, keepdims=True) # EncryptedTensor<int11, shape=(1, 1)> ∈ [-362, 545]
%4 = -16 # ClearScalar ∈ [-16, -16]
%5 = multiply(%4, %3) # EncryptedTensor<int15, shape=(1, 1)> ∈ [-8720, 5792]
%6 = subtract(%2, %5) # EncryptedTensor<int17, shape=(1, 1)> ∈ [-32802, 29088]
%7 = [[371]] # ClearTensor<uint9, shape=(1, 1)> ∈ [371, 371]
%8 = add(%6, %7) # EncryptedTensor<int16, shape=(1, 1)> ∈ [-32431, 29459]
return %8

Optimizer


Statistics
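
For reference, a sketch of how this kind of Concrete ML output can be produced; the model type is a logistic regression as mentioned, but the data and hyper-parameters are placeholders, not taken from the post:

from concrete.ml.sklearn import LogisticRegression

# hypothetical training data x_train / y_train
model = LogisticRegression(n_bits=8)  # 8-bit quantization (illustrative choice)
model.fit(x_train, y_train)           # train on clear data
model.compile(x_train)                # compile the quantized model to an FHE circuit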

Hello @Laser_beam,
I'm not sure I understand your problem: are you trying to access a Concrete ML model's MLIR?

If so, you basically have two options:

  • either compile with the show_mlir option (model.compile(x_train, show_mlir=True)), which prints the MLIR once the compilation is done (see the sketch below),
  • or print / save it manually after compiling the model, as follows:
with open("mlir.txt", "w") as mlir:
    mlir.write(model.fhe_circuit.mlir)
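
Putting the two options together, a small sketch using the same model and x_train names as in the reply:

# option 1: print the MLIR while compiling
model.compile(x_train, show_mlir=True)

# option 2: access the MLIR of the compiled circuit afterwards, as a string
print(model.fhe_circuit.mlir)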

Hope that helps!


Thank you @Roman, I will try that.