How to qualitatively evaluate the computational complexity of FHE?

Let’s take QGPT2Evaluate.ipynb, which includes a multi-head attention GPT-2 model, as an example.
I can think of two ways to analyze the computational complexity of FHE: quantitatively, by measuring the actual execution time, and qualitatively, by evaluating the computational complexity in theory (the second is where my doubt lies).

1. Quantitatively analyze the actual execution time

1.1 Execute it without FHE

import time

# Run the quantized model in the clear (FHE disabled)
proj_12_heads_qgpt2.set_fhe_mode(fhe="disable")

t1 = time.time()
output_logits_clear = proj_12_heads_qgpt2(input_ids).logits
print(f"Inference time of the multi-head attention model (fhe=\"disable\"): {(time.time() - t1):.2f} s")

1.2 Execute it with FHE

# Compile the model to an FHE circuit, then run the actual FHE execution
circuit_12_heads = proj_12_heads_qgpt2.compile(input_ids)
proj_12_heads_qgpt2.set_fhe_mode(fhe="execute")

t2 = time.time()
output_logits_executed = proj_12_heads_qgpt2(input_ids).logits
print(f"Inference time of the multi-head attention model (fhe=\"execute\"): {(time.time() - t2):.2f} s")

By comparing the two times above, we can get a rough, empirical sense of the overhead FHE introduces (see the sketch below).
However, the FHE execution itself may take a very long time to run.
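
For example, a minimal sketch that condenses the two measurements into a single slowdown factor (the timed_inference helper is hypothetical; it reuses the model and inputs from above):

# Hypothetical helper: time one forward pass and return the elapsed seconds
def timed_inference(model, input_ids):
    start = time.time()
    _ = model(input_ids).logits
    return time.time() - start

proj_12_heads_qgpt2.set_fhe_mode(fhe="disable")
clear_time = timed_inference(proj_12_heads_qgpt2, input_ids)

proj_12_heads_qgpt2.set_fhe_mode(fhe="execute")
fhe_time = timed_inference(proj_12_heads_qgpt2, input_ids)

# Empirical slowdown factor of FHE over clear execution
print(f"FHE slowdown: {fhe_time / clear_time:.0f}x")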

So, I’m curious whether there is a quantitative theory for analyzing the computational complexity of this multi-head attention model in QGPT2Evaluate.ipynb, without having to run it.

2. How to qualitatively evaluate the computational complexity of FHE in theory?

Thank you very much. I would greatly appreciate any information you can provide.

In the circuit stats, there is a field ‘complexity’:

{'size_of_secret_keys': 935096,
 'size_of_bootstrap_keys': 8946549152,
 'size_of_keyswitch_keys': 15018223432,
 'size_of_inputs': 268443640,
 'size_of_outputs': 31457760,
 'p_error': 9.921069967253719e-07,
 'global_p_error': 0.005811449593052084,
 'complexity': 18031136040838.0,
 'programmable_bootstrap_count': 18471,
 ...}

This is an estimate of the cost of executing the FHE circuit: the estimated number of basic operations (additions and multiplications) performed across all keyswitch and bootstrap operations.
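
As a back-of-the-envelope illustration of how this figure could be used (the throughput value below is an assumed placeholder, not a measured benchmark):

# Rough runtime estimate derived from the 'complexity' field above.
# ASSUMPTION: 1e9 basic operations per second is a placeholder throughput;
# real hardware varies widely, so treat the result as an order of magnitude
# at best.
complexity = 18031136040838.0
assumed_ops_per_second = 1e9
print(f"Rough runtime estimate: {complexity / assumed_ops_per_second:.0f} s")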


To get the statistics that @Rudy_Sicard mentioned, you can run the following in the notebook, right after the compile() call:

proj_single_head_qgpt2.q_attention.q_module.circuit.statistics
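
This returns a dictionary like the one shown above, so you can read out individual fields, for example:

# Read selected fields from the circuit statistics (keys as shown above)
stats = proj_single_head_qgpt2.q_attention.q_module.circuit.statistics
print("complexity:", stats["complexity"])
print("programmable bootstraps:", stats["programmable_bootstrap_count"])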

Maybe you can also have a look at the hybrid model use case, which tackles the GPT-2 model as well and benefits from a much better abstraction / API using Concrete ML → https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/hybrid_model
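
A minimal sketch of what that looks like (the module name, input shape, and parameters below are assumptions for illustration; please check the linked repository for the exact API):

import torch
from transformers import GPT2LMHeadModel
from concrete.ml.torch.hybrid_model import HybridFHEModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# ASSUMPTION: run only one attention projection in FHE, the rest in the
# clear; "transformer.h.0.attn.c_proj" is an illustrative module choice
hybrid_model = HybridFHEModel(model, module_names=["transformer.h.0.attn.c_proj"])

# Compile the FHE part with a representative input (shape is illustrative)
sample_input = torch.randint(0, 50257, (1, 32))
hybrid_model.compile_model(sample_input, n_bits=8)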


Thank you, @Rudy_Sicard, for providing the estimation details about the complexity of the FHE circuit.

Thank you, @jfrery.
I sincerely appreciate your assistance.
Your answer has been tremendously helpful for assessing the complexity of various neural network models using FHE.
