Hi. I have a question about the benchmarks of Concrete. They take a really long time to run on my laptop, and I was wondering if the results are published somewhere. I want to use a scheme for the addition of floating-point numbers. Is RLWE a good fit for this? If not, which schemes would you recommend? I have looked at Paillier, FAHE1, FAHE2, and ElGamal, but I haven't found any systematic review that compares them speed-wise.
What I want to do is send partially filled gradients (e.g. [0, 0, 0, 5, 1, 2, 0, 0, 1, 0, 0, 5], where only the biggest components are sent and the smaller values are kept locally) to a server where they are added up (a federated learning setting). I have thought of clustering the biggest components of the gradients and packing them into RLWE samples, but I am not sure whether this is the most suitable option.
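For concreteness, here is a rough sketch (plain NumPy, no encryption, names are my own) of the top-k selection I have in mind; the selected (index, value) pairs are what would then be packed into RLWE samples and summed on the server:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude components of a gradient vector.

    Returns the indices and values to send, plus the residual
    (the smaller components) that stays on the client.
    """
    idx = np.argsort(np.abs(grad))[-k:]  # indices of the k biggest components
    values = grad[idx]                   # what gets encrypted and sent
    residual = grad.copy()
    residual[idx] = 0.0                  # what is kept locally
    return idx, values, residual

# The example gradient from above
grad = np.array([0, 0, 0, 5, 1, 2, 0, 0, 1, 0, 0, 5], dtype=float)
idx, values, residual = topk_sparsify(grad, k=4)
```

With k=4 this selects the two 5s, the 2, and one of the 1s; the residual could be accumulated into the next round's gradient so the small values are not lost.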
Let me know if you have any suggestions.