[FEA] cuML to expose a "proper" CUDA API #92
Comments
Some more concrete descriptions of what needs to be done...
Note: the 'Allocator' piece can only work after PR #167 is merged! Vinay Deshpande (@vinaydes) will be helping us here, as a starter task for his onboarding on cuml.
@teju85 Exposing library handles, streams, etc. is a pretty good idea. We should definitely include this in the coming versions.
@teju85 We might not need cumlAlloc or its variations, because we use cuDF for these kinds of operations.
I see your point regarding the alloc/free functions. Those were added to be used by cuml and/or ml-prims wherever a function needs temporary allocations. Your point makes sense: let's not expose these two methods, and instead keep the custom allocator entirely internal to cuml and ml-prims.
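A minimal sketch of what such a purely internal allocator could look like; the names (`deviceAllocator`, `defaultDeviceAllocator`) are assumptions for illustration, not the actual cuML types:

```cpp
// Hypothetical sketch of an allocator kept entirely internal to cuml and
// ml-prims; names and signatures are assumptions, not the actual cuML API.
#include <cstddef>
#include <cuda_runtime.h>

class deviceAllocator {
 public:
  virtual ~deviceAllocator() = default;
  // Allocate n bytes of device memory for temporary workspace, ordered on
  // the given stream where the implementation supports it.
  virtual void* allocate(std::size_t n, cudaStream_t stream) = 0;
  // Return memory previously obtained from allocate().
  virtual void deallocate(void* p, std::size_t n, cudaStream_t stream) = 0;
};

// Default implementation on top of plain cudaMalloc/cudaFree; swapping in a
// pool allocator (e.g. one backed by RMM) would not change any caller.
class defaultDeviceAllocator : public deviceAllocator {
 public:
  void* allocate(std::size_t n, cudaStream_t) override {
    void* p = nullptr;
    cudaMalloc(&p, n);
    return p;
  }
  void deallocate(void* p, std::size_t, cudaStream_t) override {
    cudaFree(p);
  }
};
```

Because nothing outside cuml/ml-prims ever sees this interface, the implementation can be replaced later without any public API change.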
@oyilmaz-nvidia @vinaydes Updated the interface proposal above based on this feedback.
I was going to recommend that we have some global handle for tracking workspace allocations. I'm 100% for this.
@cjnolet Let's try to avoid such global variables as much as possible.
I misread the description on this issue. I was thinking along the lines of #186, which would also be good to standardize in the CUDA API.
Agreed, @cjnolet. I just tagged you on that issue and mentioned the same thing. The allocator being discussed here should simplify the workspace allocation logic by a lot.
@teju85 Regarding the allocation, the idea works perfectly, and it also lets us start using RMM for the allocations that cuDF doesn't handle. @oyilmaz-nvidia In fact, we don't always depend on cuDF for allocation: our Python APIs accept host NumPy arrays that are then transferred to the GPU with Numba, which will change to use RMM. I also absolutely love the proposed cuml_handle. On the other hand, regarding error handling: since Python is our main end-user interface for the time being, exceptions will give us a more robust error-reporting infrastructure there, and in general we are not limiting cuML to a C API (the C API can be a wrapper around the C++ one, which it currently mostly is), so at least at the C++ level we can raise exceptions, which is what RMM and cuDF are moving to soon. (Hope I was clear in that explanation, it's early here.)
@teju85 Now that it's a little later in the day, I just wanted to clarify: in my comment above I am saying that I favor both having the error codes and throwing exceptions. Languages that can link to C++ libraries get the benefit of exceptions; languages without C++ support only get the error codes.
Fair enough, makes sense @dantegd. @jirikraus and @vinaydes, there's a request to add both error codes and exceptions.
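A hedged sketch of the split discussed above: the C++ core throws, and the C wrapper translates exceptions into error codes at the ABI boundary. `cumlError_t` and `runAlgo` are illustrative names, not actual cuML symbols:

```cpp
// Sketch: exceptions in C++, error codes in C; all names are hypothetical.
#include <exception>
#include <stdexcept>

extern "C" {
typedef enum { CUML_SUCCESS = 0, CUML_ERROR_UNKNOWN = 1 } cumlError_t;
}

// C++ core: reports failures by throwing, which Python bindings can surface
// directly as Python exceptions.
void runAlgoCpp() {
  // ... may throw std::runtime_error on CUDA or input errors ...
}

// C wrapper: exceptions must never cross the C ABI, so catch everything
// here and map it to an error code.
extern "C" cumlError_t runAlgo() {
  try {
    runAlgoCpp();
    return CUML_SUCCESS;
  } catch (const std::exception&) {
    return CUML_ERROR_UNKNOWN;
  } catch (...) {
    return CUML_ERROR_UNKNOWN;
  }
}
```

This way C++ and Python callers get rich exceptions, while pure-C callers still get a well-defined status code from every entry point.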
Just explicitly adding a reference to the PR addressing this: #247.
Is your feature request related to a problem? Please describe.
We currently are not exposing the following things from our C/C++ API: CUDA streams, the underlying library handles (cublas/cusolver), device memory allocation hooks, and error codes.
The advantages of doing these are: interoperability with user-managed streams and allocators (e.g. RMM), and consistent error reporting across the C, C++, and Python layers.
Describe the solution you'd like
One solution can be to expose a cumlHandle_t structure (just like cudnn/cublas/cufft/cusolver).
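A minimal sketch of the handle pattern being proposed; `cumlCreate`, `cumlSetStream`, and `cumlDestroy` are hypothetical names modeled on the cublas convention, not the actual cuML API:

```cpp
// Hypothetical sketch of a cudnn/cublas-style opaque handle for cuML; all
// names and signatures here are illustrative assumptions, not the real API.
#include <cuda_runtime.h>

extern "C" {

// Opaque handle owning per-call state: the CUDA stream, cublas/cusolver
// handles, and the device allocator used for temporary workspace.
typedef struct cumlContext_st* cumlHandle_t;

typedef enum { CUML_SUCCESS = 0, CUML_ERROR_UNKNOWN = 1 } cumlError_t;

cumlError_t cumlCreate(cumlHandle_t* handle);  // like cublasCreate
cumlError_t cumlSetStream(cumlHandle_t handle, cudaStream_t stream);
cumlError_t cumlDestroy(cumlHandle_t handle);  // like cublasDestroy

}  // extern "C"
```

Every algorithm entry point would then take the handle as its first argument, so callers control the stream and allocator without any global state (per the discussion above).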
Describe alternatives you've considered
There are no alternatives currently.
Additional context
None.
Note
Just like #77, I'm mostly filing this issue so that it doesn't slip away. Please feel free to set its priority accordingly, @datametrician @dantegd.