Optimisations for Apple Silicon #3606
What do you mean by "support"? I believe LightGBM can be compiled on Apple Silicon. If compilation fails, could you kindly provide any logs from your machine?
Hi, by "support" I mean not only compiling it but enabling every ML capability for model training: the 8-core GPU, the two neural accelerators, and the 16-core Neural Engine. So not CUDA, but a totally different approach. TensorFlow already has an alpha version available for testing, but it seems far from working perfectly in every situation for the moment, and Apple is helping Google on this task, so it doesn't seem trivial. Anyway, recompiling for CPU is a first step. I'm not sure it can be done; for instance, NumPy cannot be recompiled like this, it fails completely. Anyway, I will try.
Thanks a lot for your fast response! To be honest, I don't think that this kind of support (a Mac-optimized version, actually) will be implemented in the near future. At least, not by our small maintaining team without any help from the outside. Our current GPU implementation is far from perfect in terms of GPU utilization #768 (comment), and the new CUDA implementation kindly contributed by IBM folks has some bugs that don't allow us even to announce it. So, I think these "general" GPU issues have higher priority. Also, please note that the Mac-optimized TensorFlow version is being developed by Apple, and the TensorFlow team is planning just to add that version as community-supported:
Linking dmlc/xgboost#6408 here.
@StrikerRUS Thanks for your answer. Anyway, if it can at least work in standard CPU mode, that would already be great. I will try to compile it when I have time and let you know.
@danbricedatascience I think the "ML engine" is better suited to neural networks, not to decision trees.
@guolinke Yes, maybe it's a pure linear algebra unit, like a TPU, but the same is true of GPUs: they are mostly pure SIMD units used for vector computation in ML. I don't know how LightGBM or the other gradient boosting packages use GPUs, but they do, so even if it cannot use the M1's Neural Engine, maybe it can use its GPU? That 2.2-TFLOPS unit has nothing to do with Intel integrated GPUs; check the benchmarks. Apple says here: "Until now, TensorFlow has only utilized the CPU for training on Mac. The new tensorflow_macos fork of TensorFlow 2.4 leverages ML Compute to enable machine learning libraries to take full advantage of not only the CPU, but also the GPU in both M1- and Intel-powered Macs for dramatically faster training performance. This starts by applying higher-level optimizations such as fusing layers, selecting the appropriate device type and compiling and executing the graph as primitives that are accelerated by BNNS on the CPU and Metal Performance Shaders on the GPU." Again, this is for tensor computation as used in TF, but it could be interesting to check whether the kind of GPU acceleration used in gradient boosting packages like LightGBM can also be applied here.
Closed in favor of being listed in #2302. We decided to keep all feature requests in one place. You are welcome to contribute this feature! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on implementing this feature.
When is LightGBM (roughly) expected to support Apple Silicon?