How can I run inference on a trained model using C++ with GPU? #5854
Comments
Can anyone help me?
Does anyone have any idea about this? Does LightGBM support GPU inference or not?
Thanks for using LightGBM. I believe that LightGBM does not currently have GPU-accelerated prediction, and that only training-related workloads run on the GPU. @shiyu1994 could confirm. If you're finding that LightGBM's existing prediction routines on CPU are not fast enough for your application and are looking to improve performance, you could also explore these projects that allow serving LightGBM models:
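For context, `device_type=gpu` takes effect at training time. Below is a minimal sketch (not from the original thread) of where that parameter is typically supplied through the C API; the feature matrix, labels, iteration count, and output file name are placeholders, and it assumes a recent, GPU-enabled LightGBM build.

```cpp
// Sketch: device_type=gpu is a training-time parameter, passed when the
// Dataset and Booster are created. Placeholder data; error checks omitted.
#include <LightGBM/c_api.h>
#include <vector>

int main() {
  const int32_t nrow = 1000, ncol = 16;
  std::vector<float> features(nrow * ncol, 0.5f);  // placeholder features
  std::vector<float> labels(nrow, 1.0f);           // placeholder labels

  // Use the same parameter string for Dataset and Booster so the
  // GPU-related settings are consistent between the two.
  const char* params = "objective=regression device_type=gpu max_bin=255";

  DatasetHandle dataset = nullptr;
  LGBM_DatasetCreateFromMat(features.data(), C_API_DTYPE_FLOAT32,
                            nrow, ncol, /*is_row_major=*/1,
                            params, nullptr, &dataset);
  LGBM_DatasetSetField(dataset, "label", labels.data(), nrow, C_API_DTYPE_FLOAT32);

  // The GPU is used here, during training, not during prediction.
  BoosterHandle booster = nullptr;
  LGBM_BoosterCreate(dataset, params, &booster);

  int is_finished = 0;
  for (int i = 0; i < 100 && !is_finished; ++i) {
    LGBM_BoosterUpdateOneIter(booster, &is_finished);
  }

  LGBM_BoosterSaveModel(booster, 0, -1, C_API_FEATURE_IMPORTANCE_SPLIT, "model.txt");
  LGBM_BoosterFree(booster);
  LGBM_DatasetFree(dataset);
  return 0;
}
```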
This issue has been automatically closed because it has been awaiting a response for too long. When you have time to work with the maintainers to resolve this issue, please post a new comment and it will be re-opened. If the issue has been locked for editing by the time you return to it, please open a new issue and reference this one. Thank you for taking the time to improve LightGBM!
I am loading a pre-trained LightGBM model and trying to run inference on the GPU:
LGBM_BoosterPredictForMat(_handle, _agg_features.data(), C_API_DTYPE_FLOAT32, 1, _input_channels * _agg_feature_dim, 1, C_API_PREDICT_NORMAL, 0, -1, "device_type=gpu", &out_len, out.data());
However, I found it still uses the CPU for inference. Is this the right way to use the GPU?
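For completeness, here is a minimal sketch of the full CPU prediction path around that call, assuming a saved model file `model.txt` and a placeholder feature count. Per the maintainer comment above, `device_type=gpu` in the prediction parameter string does not move prediction to the GPU, so an empty parameter string is used here.

```cpp
// Sketch: load a saved model and predict one row with the LightGBM C API.
// "model.txt" and the feature count are placeholders; single-output model assumed.
#include <LightGBM/c_api.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  BoosterHandle booster = nullptr;
  int num_iterations = 0;
  if (LGBM_BoosterCreateFromModelfile("model.txt", &num_iterations, &booster) != 0) {
    std::fprintf(stderr, "failed to load model\n");
    return 1;
  }

  const int32_t ncol = 64;             // must match the trained model's feature count
  std::vector<float> row(ncol, 0.0f);  // one row of input features
  std::vector<double> out(1, 0.0);     // one prediction per row (single-output model)
  int64_t out_len = 0;

  // Prediction runs on the CPU; passing "device_type=gpu" here has no effect.
  // Parameters such as num_threads can be supplied in this string to control
  // CPU parallelism for larger batches.
  LGBM_BoosterPredictForMat(booster, row.data(), C_API_DTYPE_FLOAT32,
                            /*nrow=*/1, ncol, /*is_row_major=*/1,
                            C_API_PREDICT_NORMAL,
                            /*start_iteration=*/0, /*num_iteration=*/-1,
                            "", &out_len, out.data());

  std::printf("prediction: %f (out_len=%lld)\n", out[0], (long long)out_len);

  LGBM_BoosterFree(booster);
  return 0;
}
```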