CUDA? #5
I was planning to look into this when I find the time, yes. I haven't done that before and it might take me some time to get a Yggdrasil recipe ready that compiles the CUDA library. After that it should be pretty straightforward. Perhaps it is also worth just writing the wrappers around the GPU routines already.
I'm not sure (because I don't know Julia well enough), but it seems to me that calling the GPU routines directly when the array is on the GPU should be straightforward. Is there a simple way to know whether an array resides on the GPU?
Yes, once the wrapper routines are implemented (I believe several arrays of the config, grid, etc. need to be copied to the GPU as well).
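For reference, a generic sketch (not tied to SHTns internals) of how host data gets copied to the device with CUDA.jl; the variable names are just for illustration:

```julia
using CUDA

weights = rand(Float64, 128)       # some precomputed data on the CPU
d_weights = CuArray(weights)       # device copy, element type preserved (Float64)
d_weights32 = cu(weights)          # note: `cu` converts to Float32 by default
```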
I could be misunderstanding, but I am fairly sure that you don't have to do this - CUDA.jl installs all of that for you. As long as the drivers are installed, all you need is CUDA.jl, which is very mature, easy to use, and supports the array interface. It basically uses a "transpiler" to lower a subset of Julia to native CUDA, so everything in that subset "just works",
so determining whether an array is on the device is just a matter of checking its type.
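For example (a minimal, package-agnostic illustration of the type check):

```julia
using CUDA

a = zeros(Float64, 8)          # ordinary host array
b = CUDA.zeros(Float64, 8)     # device array

a isa CuArray                  # false
b isa CuArray                  # true

# or let multiple dispatch decide from the type
on_gpu(x::CuArray) = true
on_gpu(x::AbstractArray) = false
```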
Running the tests gives some info about the stack that CUDA.jl installs (output omitted here).
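The same kind of information can be printed directly; this is CUDA.jl's own reporting function, shown here as a pointer:

```julia
using CUDA

# Reports the driver, runtime and toolkit/library versions CUDA.jl is using,
# plus the visible devices.
CUDA.versioninfo()
```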
If the SHTns library is compiled with CUDA, then when you create a plan there will be internal copies on the GPU of everything that is needed. The SHTns plan itself is an object that stays on the CPU.
For the SHTns C library, if the CUDA toolkit is installed and the environment variable CUDA_PATH is correctly set, then GPU support is enabled when the library is built. So I guess a quick hack would be: use such a CUDA-enabled build of libshtns and call the wrapper routines with CuArrays as the type for the input and output arrays. This should work, I think. @AshtonSBradley could you try it? Otherwise, I may give it a try before the end of the year.
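A minimal sketch of what that could look like from the Julia side, assuming a CUDA-enabled libshtns; `sh_analysis!` is a made-up name standing in for whatever transform wrapper the package actually exposes, and the bodies are stubs:

```julia
using CUDA

# Hypothetical wrapper methods; the real package API may differ.
# CPU path: would forward to the regular SHTns routine (stubbed here).
sh_analysis!(spec::Vector{ComplexF64}, spat::Matrix{Float64}) =
    println("CPU routine")

# GPU path: would forward to the CUDA-enabled SHTns routine, with the data
# already resident on the device (stubbed here).
sh_analysis!(spec::CuVector{ComplexF64}, spat::CuMatrix{Float64}) =
    println("GPU routine")

# Same call site for both; only the array types differ.
sh_analysis!(zeros(ComplexF64, 16), zeros(8, 16))
if CUDA.functional()
    sh_analysis!(CUDA.zeros(ComplexF64, 16), CUDA.zeros(Float64, 8, 16))
end
```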
What I meant to say is that the compilation of the C library is the part that needs the CUDA toolkit; I agree that afterwards it's all taken care of. Then, one can use their custom SHTns build until I manage to make the GPU-enabled Yggdrasil build available.
Hi, very interested to see this happening. Is there any chance this will support CUDA.jl?