Hi,
I urgently need to run XGBoost with GPU. However, the example page does not explain how to enable the GPU:
I wonder if there is any easy way to enable GPU for this XGBoost example.
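For reference, here is a minimal sketch of how GPU execution is typically requested in Concrete ML, assuming a recent version installed with CUDA support. The helper name `compile_for_gpu` is mine; the relevant knob is the `device="cuda"` argument to `compile()`.

```python
# Hedged sketch: requesting GPU execution in Concrete ML (assumes a recent
# concrete-ml installed with CUDA support; compile(..., device="cuda") is
# the argument that selects the GPU backend).

def compile_for_gpu(model, X_calib):
    """Compile a fitted Concrete ML model for CUDA execution.

    `model` is a fitted Concrete ML estimator (e.g. its XGBClassifier);
    `X_calib` is the calibration data normally passed to compile().
    """
    # device="cuda" asks the compiler to target the GPU backend; it
    # raises if the installed runtime has no GPU support.
    model.compile(X_calib, device="cuda")
    return model

# Typical usage (commented out, requires concrete-ml and a CUDA GPU):
# from concrete.ml.sklearn import XGBClassifier
# model = XGBClassifier(n_estimators=50, max_depth=4)
# model.fit(X_train, y_train)
# compile_for_gpu(model, X_train)
# y_pred = model.predict(X_test, fhe="execute")
```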
Thanks for your reply.
I managed to run XGBoost with CUDA (my GPU is an RTX 4090). I found a dataset and model where the GPU version performs much worse than the CPU version:
This dataset has 64 features and 10 classes. If I run this with Concrete ML with and without GPU, the inference times are as follows:
Without GPU: 222.72s
With GPU: 231.42s
I’d like to understand why the GPU run is even slower than the CPU one. What are the possible reasons behind this strange phenomenon? If you know the internals of TFHE-rs, please share your ideas.
Only large models with large layers of non-linear operations can effectively take advantage of the GPU. Tree ensembles commonly perform a few hundred or a few thousand comparisons, depending on the number of trees and their depth. We measured speed-ups only for models with tens of thousands of these operations in a single layer (large CNNs).
Moreover, on laptop-grade GPUs there may not be any speed-up, as explained on the documentation page I linked above. I would suggest performing the same benchmark on an H100 GPU.
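To make CPU/GPU comparisons like the one above reproducible, a small timing helper can be used. This is a sketch under the assumption that `model` is an already-compiled Concrete ML estimator whose `predict()` accepts `fhe="execute"` to run on encrypted data.

```python
import time

def time_fhe_inference(model, X, n_runs=3):
    """Return the best-of-n wall-clock time (seconds) for FHE inference.

    Sketch only: assumes `model` is a compiled Concrete ML estimator
    whose predict() accepts fhe="execute" for encrypted execution.
    """
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        model.predict(X, fhe="execute")  # encrypted execution
        timings.append(time.perf_counter() - start)
    # Best-of-n is less noisy than the mean for wall-clock benchmarks.
    return min(timings)
```

Running this for the CPU-compiled and GPU-compiled variants of the same model on the same inputs gives directly comparable numbers.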