I am working with the Concrete library, and I want to fix the TFHE parameters of an application, similar to what is done for the benchmark applications NN-20/NN-50/NN-100 in Reference [1]. The TFHE parameters reported in Reference [1:1] (128-bit security) are:
GLWE: k=1, N=4096, \sigma=2^{-62}.
LWE: n=938, \sigma=2^{-23}.
The default value of the Concrete `Configuration` parameter `parameter_selection_strategy` is `"MULTI"`, which uses multiple parameter sets within the same application; the parameters are searched automatically by the Concrete optimizer and cannot be set manually.
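For reference, this is roughly how the strategy is selected in code (a minimal sketch, assuming concrete-python's public `Configuration` API):

```python
from concrete import fhe

# Default behavior: the optimizer may pick multiple parameter sets per circuit.
# The other strategy values are ParameterSelectionStrategy.MONO and .V0.
config = fhe.Configuration(
    parameter_selection_strategy=fhe.ParameterSelectionStrategy.MULTI,
)
```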
My goal
I want to set the parameters manually, matching the values reported in References [1:2] and [2]. If that is not possible, I would at least like to use a single set of parameters instead of multiple sets.
Here are my two attempts.
Attempt 1: parameter_selection_strategy="V0"
I added the following code to `frontends/concrete/fhe/compilation/server.py` in the Concrete source tree:

```python
# Map the reported parameters: k=1 -> glwe_dim, N=4096=2^12 -> log_poly_size,
# n=938 -> n_small; the br_*/ks_* values were not reported, so they are arbitrary.
options.set_v0_parameter(
    glwe_dim=1,
    log_poly_size=12,
    n_small=938,
    br_level=2,
    br_log_base=10,
    ks_level=9,
    ks_log_base=2,
)
```
It seems `set_v0_parameter` does not allow setting \sigma, and since Reference [1:3] does not report `br_level`, `br_log_base`, `ks_level`, or `ks_log_base`, I set them arbitrarily.
I changed `parameter_selection_strategy` to `"V0"`.
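The strategy change itself is just the configuration value (same sketch as above, assuming the same API):

```python
from concrete import fhe

# Attempt 1: force the single "V0" parameter set patched in above.
config = fhe.Configuration(
    parameter_selection_strategy=fhe.ParameterSelectionStrategy.V0,
)
```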
Result: Concrete runs without errors, but the inference results are completely incorrect.
Attempt 2: parameter_selection_strategy="MONO"
I switched `parameter_selection_strategy` to `"MONO"`. It finds parameters for NN-20, NN-50, and NN-100, but not for some networks such as DeepCNN-X from Reference [2:1]. For instance, it finds parameters for DeepCNN-20 under `"MONO"`, but fails for DeepCNN-50 and DeepCNN-100 with a `RuntimeError: NoParametersFound`. Moreover, even though the results are correct for DeepCNN-20 using `"MONO"`, it is significantly slower than with `"MULTI"`.
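The configuration change for this attempt (again a sketch, assuming the same `Configuration` API):

```python
from concrete import fhe

# Attempt 2: a single, automatically searched parameter set.
config = fhe.Configuration(
    parameter_selection_strategy=fhe.ParameterSelectionStrategy.MONO,
)
# DeepCNN-20 compiles (slowly); DeepCNN-50/100 abort with:
#   RuntimeError: NoParametersFound
```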
Summary
- Is it true that using `parameter_selection_strategy="V0"` with the parameters reported in some papers leads to incorrect results?
- Does `parameter_selection_strategy="MONO"` fail to find parameters for more complex models?
- With `"MULTI"`, I noticed that even for the same neural network model, different inputs (i.e., different `torch_inputset` arguments to `compile_torch_model`) lead to different selected parameters (see the sketch after this list). Is there a way to ensure the same set of parameters is reused for the same model, regardless of the input?
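For context, my compilation calls look roughly like this (a simplified sketch; `model`, `inputset_a`, `inputset_b`, and `config` are placeholders, and I assume Concrete ML's `compile_torch_model` signature):

```python
from concrete.ml.torch.compile import compile_torch_model

# Same model, two different calibration inputsets: each call runs the
# optimizer independently, so the selected crypto parameters can differ.
module_a = compile_torch_model(model, inputset_a, configuration=config)
module_b = compile_torch_model(model, inputset_b, configuration=config)
```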
[1] I. Chillotti, M. Joye, and P. Paillier, “Programmable Bootstrapping Enables Efficient Homomorphic Inference of Deep Neural Networks,” in Cyber Security Cryptography and Machine Learning, S. Dolev, O. Margalit, B. Pinkas, and A. Schwarzmann, Eds., Cham: Springer International Publishing, 2021, pp. 1–19. doi: 10.1007/978-3-030-78086-9_1.
[2] Prasetiyo, A. Putra, and J.-Y. Kim, “Morphling: A Throughput-Maximized TFHE-based Accelerator using Transform-domain Reuse,” in 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Mar. 2024, pp. 249–262. doi: 10.1109/HPCA57654.2024.00028.