I am using the latest Concrete ML version. I used the following docker command to install it:
docker run --rm -it -p 8888:8888 -v /host/path:/data zamafhe/concrete-ml
Yes, you are right. The n_bits parameter basically defines the precision of your features.
The quantizer splits the entire input range into 2^{\text{n\_bits}} evenly spaced values between the min and max. So with n_bits = 2 you are mapping your entire training set to just 2^{2} = 4 different values.
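To make this concrete, here is a minimal sketch of uniform min/max quantization in plain NumPy (an illustration of the idea only, not Concrete ML's actual quantizer):

```python
import numpy as np

def uniform_quantize(x, n_bits=2):
    # Map x onto 2**n_bits evenly spaced levels between its min and max.
    levels = 2 ** n_bits
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min) / (levels - 1)
    indices = np.round((x - x_min) / scale)  # integer level in [0, levels - 1]
    return indices * scale + x_min           # de-quantized values

x = np.array([0.0, 0.1, 0.5, 2.3, 9.7, 10.0])
print(uniform_quantize(x, n_bits=2))  # at most 4 distinct values remain
```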
A few solutions that can help:
standardize your data: in your notebook, your features have very different scales, so the quantization completely misses the features with a small range. You can try something like this to get your features on the same scale:
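For example, with scikit-learn's StandardScaler (X_train and X_test here are placeholders for your notebook's data):

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit mean/std on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the training statistics
```

With all features on the same scale, the quantization levels are no longer dominated by the feature with the widest range.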
More good news: in the release due in early January, there will be no more programmable bootstrapping (PBS) in linear models (like LogisticRegression), so model_fhe.n_bits can be much larger. Also, these models will be crazy fast, since the PBS is usually the bottleneck.
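As a rough sketch of what that usage looks like (parameter names and defaults may differ in the actual release, so treat this as an assumption and check the docs; the data variables are the placeholders from above):

```python
from concrete.ml.sklearn import LogisticRegression

# A larger n_bits becomes affordable once linear models no longer need a PBS.
model = LogisticRegression(n_bits=8)
model.fit(X_train_scaled, y_train)
model.compile(X_train_scaled)         # build the FHE circuit
preds = model.predict(X_test_scaled)  # quantized inference in the clear
```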