Thanks @yundsi for your detailed reply. To be honest, my main codebase is quite lengthy and complex, so I created this simplified example to highlight the issue. In my implementation I also have other computations and functions that rely on these function results, but to avoid confusion I excluded those details from this discussion. My earlier response mistakenly omitted the sum in the example, so here is the updated version of my code, which more closely resembles the real implementation:
Modular Approach:
```python
import numpy as np
from concrete import fhe

@fhe.module()
class GeneralizedModule:
    @fhe.function({"x": "encrypted", "y": "clear"})
    def step1_func(x, y):
        result = np.sum(x * y).astype(np.int64)
        return fhe.refresh(result)

    @fhe.function({"x": "encrypted", "y": "clear"})
    def step2_func(x, y):
        result = np.sum((1 - x) * y).astype(np.int64)
        return fhe.refresh(result)
```
Single-Function Implementation:
```python
@fhe.module()
class GeneralizedSingleFunction:
    @fhe.function({"x": "encrypted", "y": "clear"})
    def combined_func(x, y):
        product = x * y
        output1 = np.sum(product).astype(np.int64)
        output2 = np.sum((1 - product) * y).astype(np.int64)
        return output1, output2
```
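For reference, this is roughly how I compile the two variants (a sketch; the inputset below is a random placeholder, not my real data):

```python
# Placeholder inputset: binary encrypted vectors with small clear weights.
inputset = [
    (
        np.random.randint(0, 2, size=(8,)),
        np.random.randint(0, 4, size=(8,)),
    )
    for _ in range(20)
]

modular = GeneralizedModule.compile(
    {"step1_func": inputset, "step2_func": inputset}
)
single = GeneralizedSingleFunction.compile({"combined_func": inputset})
```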
1. Using `astype` to Prevent Errors
Regarding the point you raised in your previous answer, I consistently encountered errors when I removed the `astype` call. Without `astype`, the implementations failed to compile, producing the following error:
```
Compilation Failed: Program cannot be composed (see Common errors | Concrete):
At location HP_chunked_concrete.py:109:0:
The noise of the node 0 is contaminated by noise coming straight from the input (partition: 0, coeff: 1000.00).
```
The only change I made was removing `astype`. It seems odd, but without `astype`, the noise level apparently rises to a point that prevents successful composition. Although this doesn't seem entirely logical to me at first glance, I haven't found a workaround other than retaining `astype`. Any insight or comment about the reason behind this would be helpful.
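For completeness, this is the failing variant, i.e., `step1_func` with only the cast removed:

```python
@fhe.module()
class GeneralizedModuleNoCast:
    @fhe.function({"x": "encrypted", "y": "clear"})
    def step1_func(x, y):
        # Identical to step1_func above, except the .astype(np.int64) cast
        # is dropped; this version produces the composition error for me.
        result = np.sum(x * y)
        return fhe.refresh(result)
```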
2. Overhead in Modular Composition
I’ve tested `fhe.Wired` and automatic module tracing based on the examples in the library documentation, but neither had a significant impact on computation time or size. The observations we discussed seem accurate: modular composition introduces overhead, which likely explains why the single-function implementation is faster. I believe this reasoning is correct, but I’d appreciate any additional insights.
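For reference, here is a minimal sketch of the kind of wiring I tried. The single wire (feeding `step1_func`'s output into `step2_func`'s encrypted input) is purely illustrative, and it assumes `step2_func` is traced with inputs of a matching shape:

```python
@fhe.module()
class WiredModule:
    @fhe.function({"x": "encrypted", "y": "clear"})
    def step1_func(x, y):
        return fhe.refresh(np.sum(x * y).astype(np.int64))

    @fhe.function({"x": "encrypted", "y": "clear"})
    def step2_func(x, y):
        return fhe.refresh(np.sum((1 - x) * y).astype(np.int64))

    # Explicit composition: step1_func's encrypted output feeds
    # step2_func's first (encrypted) input.
    composition = fhe.Wired(
        [
            fhe.Wire(fhe.Output(step1_func, 0), fhe.Input(step2_func, 0)),
        ]
    )
```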
3. The Impact of Chunking
I also implemented my model in other FHE schemes (e.g., BGV, BFV) to compare the same logic across libraries. As you know, FHE parameters can only be pushed so far before the noise budget is exhausted, and in other FHE libraries these parameters must be set manually. Because of this, my implementations chunk the input data, and I used a similar chunking method in the Concrete implementation to keep the comparison consistent. However, I found that chunking has surprising effects on both computation time and ciphertext size: when I increase the chunk size (i.e., decrease the number of chunks), computation time decreases, while the ciphertext size grows up to a certain level, after which it stabilizes and does not change.
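To make the pattern concrete, here is a minimal sketch of the chunking I mean; the chunk size, shapes, and value ranges are placeholders:

```python
import numpy as np
from concrete import fhe

CHUNK = 4  # placeholder chunk size; this is the knob I am varying

@fhe.compiler({"x": "encrypted", "y": "clear"})
def chunked_dot(x, y):
    # Split the flat vectors into fixed-size chunks, reduce each chunk,
    # then accumulate the per-chunk partial sums.
    partial = np.sum((x * y).reshape(-1, CHUNK), axis=1)
    return np.sum(partial).astype(np.int64)

# Placeholder inputset: length-16 vectors split into four chunks.
inputset = [
    (
        np.random.randint(0, 2, size=(16,)),
        np.random.randint(0, 4, size=(16,)),
    )
    for _ in range(20)
]
circuit = chunked_dot.compile(inputset)
```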
After observing this behavior, I tried to understand more about the parameters Concrete selects automatically behind the scenes to optimize and compile the code. Following your documentation on common errors, I thought it would be useful to use `show_bit_width_constraints=True` to understand the size of the input, as well as the limits on the input chunk size and its optimal value. However, the result I got was confusing and not understandable to me. It was:
```
Bit-Width Constraints for all
--------------------------------------------------------------------------------
%0:
    all.%0 >= 1
%1:
    all.%1 >= 1
%2:
    all.%2 >= 1
    all.%0 == all.%1
    all.%1 == all.%2
%3:
    all.%3 >= 1
    all.%2 == all.%3
%4:
    all.%4 >= 1
%5:
    all.%5 >= 1
%6:
    all.%6 >= 1
    all.%5 == all.%2
    all.%2 == all.%6
%7:
    all.%7 >= 1
    all.%6 == all.%1
    all.%1 == all.%7
%8:
    all.%8 >= 1
    all.%7 == all.%8
%9:
    all.%9 >= 1
%10:
    all.%10 >= 1
    all.%4 == all.%9
    all.%4 >= 2
    all.%9 >= 2
%11:
    all.%11 >= 1
```
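For context, this is how I enabled the flag; I am assuming it can be passed to compile like the other show_* configuration options, and the inputset is again a placeholder:

```python
modular = GeneralizedModule.compile(
    {"step1_func": inputset, "step2_func": inputset},
    show_bit_width_constraints=True,  # prints the constraint listing above
)
```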
I would appreciate it if you could tell me how I can see the parameters one usually sets for an FHE model, such as the polynomial modulus, plaintext modulus, and so on, and whether there are limitations on the input size. How can I tell when I will hit the 16-bit input size limitation you mentioned?
4. Matrix Operations and Supported Functions
As my input data resembles a database table, I’ve considered using matrices for some computations. I noticed `np.matmul` is listed among the supported operations, and I also found this discussion (General NxN matrix multiplication - #6 by dalvi) on NxN matrix multiplication. I think a matrix formulation could be advantageous in my computations, but I’m unsure whether concrete-python fully supports matrix operations. If you have any guidance or examples, I’d greatly appreciate it.
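For reference, this is a minimal sketch of the kind of matrix computation I have in mind (encrypted table times clear matrix; shapes and value ranges are placeholders):

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted", "y": "clear"})
def table_matmul(x, y):
    # Encrypted table (x) multiplied by a clear matrix (y).
    return np.matmul(x, y)

# Placeholder inputset: small 3x3 tables with small value ranges.
inputset = [
    (
        np.random.randint(0, 4, size=(3, 3)),
        np.random.randint(0, 4, size=(3, 3)),
    )
    for _ in range(20)
]
circuit = table_matmul.compile(inputset)
```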
5. Process Killed Message
My code compiles completely without any error, and the results are correct and accurate. However, every time I run it, I get this message as the last line of my output:

```
zsh: killed     python name_of_my_code.py
```

I cannot understand what the reason for this is, or whether it’s something I should be concerned about.
Sorry this became long and complex. Thank you in advance for your help.