This repository has been archived by the owner on Oct 26, 2024. It is now read-only.
I really don't think we should do this for all our fast code since it's going to be a lot of extra work, but I think doing it once will give us a sense of what is achievable and provide a useful reference point for other benchmarks of the various JIT tools (jax, pytorch).
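For reference, below is a minimal sketch of the shape a raw CUDA version could take: one thread per simulated trial, evaluating a simplified log-integrand over a fixed sigma² grid. Everything here is a hypothetical illustration rather than code from this repository — the kernel name, `N_ARMS`, `N_SIGMA2`, the toy data, and the placeholder math (which skips the Newton solve for the latent mode and the Hessian/log-determinant correction that the real INLA integrand needs) are all assumptions.

```cuda
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

constexpr int N_ARMS = 4;     // arms per Berry-style trial (assumed)
constexpr int N_SIGMA2 = 15;  // size of the sigma^2 quadrature grid (assumed)
__constant__ double TWO_PI = 6.283185307179586;

// One thread per dataset: evaluate a simplified log-integrand at each sigma^2
// grid point. A real implementation would replace the fixed theta = 0
// evaluation with a per-arm Newton solve for the latent mode and a
// log-determinant correction.
__global__ void berry_inla_kernel(const int* __restrict__ y,              // successes [n_datasets * N_ARMS]
                                  const int* __restrict__ n,              // trials    [n_datasets * N_ARMS]
                                  const double* __restrict__ sigma2_grid, // [N_SIGMA2]
                                  double* __restrict__ out,               // [n_datasets * N_SIGMA2]
                                  int n_datasets)
{
    int d = blockIdx.x * blockDim.x + threadIdx.x;
    if (d >= n_datasets) return;

    for (int g = 0; g < N_SIGMA2; ++g) {
        double sigma2 = sigma2_grid[g];
        double acc = 0.0;
        for (int a = 0; a < N_ARMS; ++a) {
            double theta = 0.0;                    // placeholder for the latent mode
            double p = 1.0 / (1.0 + exp(-theta));  // logit link
            int yi = y[d * N_ARMS + a];
            int ni = n[d * N_ARMS + a];
            // binomial log-likelihood plus a N(0, sigma2) prior term on theta
            acc += yi * log(p) + (ni - yi) * log(1.0 - p)
                 - 0.5 * theta * theta / sigma2
                 - 0.5 * log(TWO_PI * sigma2);
        }
        out[d * N_SIGMA2 + g] = acc;
    }
}

int main()
{
    const int n_datasets = 1024;
    // toy data: 35 patients per arm, 10 responders in every arm
    std::vector<int> y(n_datasets * N_ARMS, 10), n(n_datasets * N_ARMS, 35);
    std::vector<double> grid(N_SIGMA2);
    for (int g = 0; g < N_SIGMA2; ++g)
        grid[g] = 1e-2 * pow(10.0, 4.0 * g / (N_SIGMA2 - 1));  // log-spaced 1e-2 .. 1e2

    int *d_y, *d_n;
    double *d_grid, *d_out;
    cudaMalloc(&d_y, y.size() * sizeof(int));
    cudaMalloc(&d_n, n.size() * sizeof(int));
    cudaMalloc(&d_grid, N_SIGMA2 * sizeof(double));
    cudaMalloc(&d_out, n_datasets * N_SIGMA2 * sizeof(double));
    cudaMemcpy(d_y, y.data(), y.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_n, n.data(), n.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_grid, grid.data(), N_SIGMA2 * sizeof(double), cudaMemcpyHostToDevice);

    int block = 128;
    berry_inla_kernel<<<(n_datasets + block - 1) / block, block>>>(d_y, d_n, d_grid, d_out, n_datasets);
    cudaDeviceSynchronize();

    std::vector<double> out(n_datasets * N_SIGMA2);
    cudaMemcpy(out.data(), d_out, out.size() * sizeof(double), cudaMemcpyDeviceToHost);
    printf("dataset 0, grid point 0: %f\n", out[0]);

    cudaFree(d_y); cudaFree(d_n); cudaFree(d_grid); cudaFree(d_out);
    return 0;
}
```

Even a skeleton like this would pin down the memory layout and launch configuration, which is most of what a fair comparison against the JAX/PyTorch JIT versions needs.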
tbenthompson changed the title from "Write a raw CUDA implementation of fast_inla" to "Raw CUDA Berry INLA implementation" on Jul 14, 2022.