N_bits evaluation for Post-Training Quantization

Hi,
I am testing the library to build FHE-compliant models from several fitted scikit-learn models.
I am wondering how to correctly evaluate the n_bits parameter: what aspects should I consider?

As a test, I set n_bits to 14, but all the predictions of my Concrete ML binary classifier are wrongly 0.
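
For reference, this is roughly the kind of check I am running (a minimal sketch: a synthetic dataset stands in for my real data, and the hyper-parameters are placeholders, not my actual settings):

```python
# Sketch: sweep n_bits and compare quantized accuracy in the clear (no FHE).
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import XGBClassifier

# Synthetic data as a stand-in for my real dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_bits in [2, 4, 6, 8]:
    model = XGBClassifier(n_bits=n_bits, n_estimators=50, max_depth=4)
    model.fit(X_train, y_train)

    # Quantized inference in the clear, to isolate the quantization error
    y_pred = model.predict(X_test)
    print(f"n_bits={n_bits}: accuracy={accuracy_score(y_test, y_pred):.3f}")
```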

Thank you!

Hello!

Can you share some code? What model and what kind of dataset?

Of course.
I published everything in this repository: GitHub - CardoFlare/zama-concrete.ml-issue

The dataset is a subset, for security reasons, but it is representative of the full dataset. The model is an XGBoost classifier for binary classification (0 or 1).

Problem: the accuracy of the Concrete ML model (with FHE enabled or disabled) is extremely low compared to the original model (the one not imported into Concrete ML).
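
For context, the comparison I am making is along these lines (a simplified sketch, not the exact code from the repository; the `fhe="simulate"` keyword and the hyper-parameters are assumptions based on a recent Concrete ML version):

```python
# Sketch: compare the float model against the Concrete ML quantized model.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier as FloatXGBClassifier

from concrete.ml.sklearn import XGBClassifier as ConcreteXGBClassifier

# Synthetic data as a stand-in for my real dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Original (float) model, trained directly with xgboost
float_model = FloatXGBClassifier(n_estimators=50, max_depth=4)
float_model.fit(X_train, y_train)
acc_float = accuracy_score(y_test, float_model.predict(X_test))

# Concrete ML model with post-training quantization
concrete_model = ConcreteXGBClassifier(n_bits=6, n_estimators=50, max_depth=4)
concrete_model.fit(X_train, y_train)

# Quantized inference in the clear (FHE disabled)
acc_clear = accuracy_score(y_test, concrete_model.predict(X_test))

# FHE simulation after compiling on a representative input set
concrete_model.compile(X_train)
acc_sim = accuracy_score(y_test, concrete_model.predict(X_test, fhe="simulate"))

print(f"float: {acc_float:.3f}, quantized: {acc_clear:.3f}, simulated FHE: {acc_sim:.3f}")
```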

Note: I recommend using Python 3.10.15 and creating a virtual environment, so that the test runs in the same development environment.

Thank you in advance!