The Zama compiler lowers the FHE dialect to LLVM IR and uses the LLVM infrastructure to generate the corresponding binary files.
So why does it not lower the FHE dialect to the EmitC dialect instead, outputting C++ source code that calls TFHE library functions? That way, the Zama frontend (concrete-python) could compile and link this C++ source file against the TFHE library using clang/gcc, likewise producing the corresponding binary files.
Why did Zama choose the former approach over the latter? Are there any particular considerations behind this?
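For concreteness, here is a rough Python sketch of the alternative flow I have in mind, where the frontend shells out to clang to build the generated C++. The file names and the `-ltfhe` library name are just placeholders for illustration, not real Concrete artifacts:

```python
import subprocess

# Hypothetical flow: suppose the compiler has already lowered the FHE
# dialect through EmitC and written C++ source to "circuit.cpp". The
# frontend could then compile and link it against a TFHE library.
# "circuit.cpp", "-ltfhe", and "circuit.bin" are placeholder names.
subprocess.run(
    ["clang++", "circuit.cpp", "-O2", "-ltfhe", "-o", "circuit.bin"],
    check=True,
)
subprocess.run(["./circuit.bin"], check=True)  # run the produced binary
```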
I don’t think we considered this approach during the design of the compiler. The goal was to generate a binary file that can be executed, so we naturally went with LLVM/MLIR codegen. We did this so that you could compile and execute directly from your Python interpreter, with Concrete managing all the artifacts for you. So we didn’t consider using emitC.
If I understand correctly, using emitC, you could replicate the same features by compiling the output C++ code (but then, why would that be better than the current codegen we are using?). Or you could change the user experience, and return a C++ code and leave the rest for the user, and then I would ask myself: why would that be useful? I think it would make sense if some users would want C++ code to audit, tweak, and compile themselves, but not sure who would want that. Even if this was a real need, then I would see Concrete supporting both lowerings as the main feature is to be able to run everything in Python.
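To illustrate what "compile and execute directly from your Python interpreter" means in practice, this is roughly what the documented concrete-python flow looks like (the function and inputset here are arbitrary examples):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def add_forty_two(x):
    return x + 42

# compile() produces and manages the native artifacts behind the scenes;
# the user never touches C++ sources or object files.
circuit = add_forty_two.compile(range(8))

# Encrypt, run on the compiled circuit, and decrypt, all from Python.
assert circuit.encrypt_run_decrypt(3) == 45
```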
@ayoub has already given the main reason why we currently do not generate C/C++ source code: the ability to generate binary code without invoking additional compilation infrastructure. Also, there is a substantial number of passes converting to the LLVM dialect, which means that the risk of ending up in a dead end on the lowering path when using a new operation from the existing dialects is relatively low with LLVM IR as the target. Finally, EmitC wasn’t yet integrated into MLIR when we started working on the compiler and hadn’t gained much traction until recently.
Although EmitC certainly enables interesting use cases, there don’t seem to be any direct benefits for the current compilation flow. However, if you would like to explore this direction, I would recommend trying to pass the output of `concretecompiler --action=dump-std` to `mlir-opt` with the right options to generate IR in the EmitC dialect (e.g., `--convert-arith-to-emitc`, `--convert-func-to-emitc`, etc.; `mlir-opt --help` provides a list of all EmitC-related conversion passes).
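As a sketch, that experiment could be driven from Python along these lines; the pass names are the ones mentioned above and may differ across MLIR versions, and `circuit.mlir` is a placeholder input file:

```python
import subprocess

# Dump the std-level IR from concretecompiler ("circuit.mlir" is a
# placeholder input file name).
std_ir = subprocess.run(
    ["concretecompiler", "--action=dump-std", "circuit.mlir"],
    capture_output=True, text=True, check=True,
).stdout

# Feed the std dump to mlir-opt and convert it to the EmitC dialect.
emitc_ir = subprocess.run(
    ["mlir-opt", "--convert-arith-to-emitc", "--convert-func-to-emitc"],
    input=std_ir, capture_output=True, text=True, check=True,
).stdout
print(emitc_ir)
```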
-
We did this so that you could compile and execute directly from your Python interpreter, with Concrete managing all the artifacts for you.
=>
By using EmitC to generate C++ code and leveraging the LLVM toolchain within Concrete to produce and execute the target files, the entire compilation and execution process can be wrapped in Python to provide the same interface. This way, Concrete can still manage all output artifacts. The generated C++ code serves only as intermediate code, with the final output remaining consistent with the current output from Concrete.
-
why would that be useful?
=>
There may be some large applications where privacy protection is only a small part. Using Concrete, I can conveniently generate privacy-enabling code (.cpp) and then integrate this generated code into my larger project to compile and build everything together.
-
Zama can directly generate binary code without the need to invoke additional compilation infrastructure.
=>
In the current implementation of Concrete, the LLVM infrastructure is also used; the only difference is that EmitC adds an additional LLVM frontend, clang. Additionally, the C++ code generated via EmitC needs to be compiled in a separate step, which differs from the current approach in Concrete.
-
there is a substantial number of passes converting to the LLVM dialect, which means that the risk of ending up in a dead end on the lowering path when using a new operation from the existing dialects is relatively low with LLVM IR as the target.
=>
In fact, the risk of lowering from the LLVM dialect to EmitC is also relatively low.
You can still link to the shared object generated by the compilation. So if you have a larger system, you can use the compiled circuit as a library. We haven’t documented this much, but this is possible.
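For example, a larger application could load the compiled circuit dynamically; the artifact path and symbol name below are assumptions for illustration, since the real entry points (ciphertext buffers, runtime context, etc.) depend on the compiled circuit:

```python
import ctypes

# Hypothetical sketch: load the shared object produced by the compiler
# from a larger application. "circuit.so" and "concrete_entry_point" are
# placeholder names; the actual exported symbols and argument types
# depend on the compiled circuit.
lib = ctypes.CDLL("./circuit.so")
run = lib.concrete_entry_point  # placeholder symbol name
```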
Hey @yuucyf, indeed this is a good argument for having EmitC as a backend/artifact.
As @andi says, it is more or less already possible by dumping the `std` dialect and then using `mlir-opt` to call passes that transform the `std` dump of the compiler to `emitc`, which also means it should be straightforward to include that into our compiler to emit C++ code as an artifact.
As @ayoub says, you could also directly link with the shared lib (or a static one, which can also be generated by the compiler).
However, in both cases it is not enough IMO to integrate that into an external project, as you would also need to compile the `ConcretelangRuntime` to link with the C++ code, plus the `ConcretelangServerLib` and `ConcretelangClientLib` to manage encryption, serialization, and pre/post-processing of ciphertexts on both the client and server side.