[REVIEW] Moving MNMG decomp to cuml #2427
Conversation
Please update the changelog in order to start CI tests. View the gpuCI docs here.
First pass looks great. Mostly minor things w.r.t. the move.
Other items are just pending refactors / cleanup that we should probably track in GitHub issues.
Mostly my requests are just to provide references back to the relevant GitHub issues for tracking (in case not all of it gets done as part of this refactor, it will make it easier for future eyes to see the intentions and pull up the relevant issue).
Otherwise, it LGTM, pending successful CI of course.
const std::shared_ptr<deviceAllocator> allocator =
  handle.getImpl().getDeviceAllocator();

int len = prms.n_cols * prms.n_cols;
Doesn’t need to be done in this PR. I wanted to reference the issue here for future tracking.
cpp/test/mg/pca.cu
Outdated
components = (T*)allocator->allocate(
  prmsPCA.n_components * prmsPCA.n_cols * sizeof(T), stream);

explained_var =
That doesn’t have any effect on our ability to use the allocator vs. device_buffer. You would just need to call buffer_ptr.data(). We really should be using device_buffer everywhere, since it is RAII, rather than using the allocator directly anymore.
rerun tests
LGTM
No description provided.