Revert print vector changes because of std::vector<bool> #681
Conversation
LGTM
Huh, good point :) Still, let's not have manual allocations.
CUDA_CHECK(
  cudaMemcpy(host_mem.data(), devMem, componentsCount * sizeof(T), cudaMemcpyDeviceToHost));
print_host_vector(variable_name, host_mem.data(), componentsCount, out);

T* host_mem = new T[componentsCount];
I'd suggest using unique_ptr to avoid a memory leak on an exception, e.g.
auto host_mem = std::make_unique<T[]>(componentsCount);
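For illustration, a minimal sketch of how the commented snippet could look with that suggestion, reusing the names visible in the diff (CUDA_CHECK, print_host_vector, devMem, componentsCount, variable_name, out); the enclosing function signature is assumed for this sketch and is not taken from the PR:

```cpp
#include <cuda_runtime.h>
#include <memory>

// Sketch only: copy a device array into an owned host buffer and print it.
// CUDA_CHECK and print_host_vector are the helpers referenced in the diff.
template <class T, class OutStream>
void print_device_vector(const char* variable_name,
                         const T* devMem,
                         size_t componentsCount,
                         OutStream& out)
{
  // Owning buffer: released automatically even if CUDA_CHECK throws,
  // unlike the raw `new T[componentsCount]` on the commented line.
  auto host_mem = std::make_unique<T[]>(componentsCount);
  CUDA_CHECK(
    cudaMemcpy(host_mem.get(), devMem, componentsCount * sizeof(T), cudaMemcpyDeviceToHost));
  print_host_vector(variable_name, host_mem.get(), componentsCount, out);
}
```

This also works when T is bool, since std::make_unique<T[]> allocates a plain contiguous array rather than the packed std::vector<bool> representation.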
@gpucibot merge
Continuation of #681

Authors:
- Micka (https://github.com/lowener)

Approvers:
- Corey J. Nolet (https://github.com/cjnolet)

URL: #695
The specialization of std::vector when T=bool is unfortunately causing a compilation issue in cuml, because the data() member function is not implemented, and the elements may not be stored contiguously. (Link to the CI failure: https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/cuml/job/prb/job/cuml-cpu-cuda-build-arm64/CUDA=11.5/1364/console)
cc @achirkin
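For context, a small self-contained illustration (hypothetical snippet, not from the PR) of why the bool specialization breaks this pattern: the packed representation does not provide a data() member returning bool*, so generic code that relies on v.data() fails to compile when instantiated with T = bool:

```cpp
#include <vector>

// Generic helper that hands a contiguous host buffer to a C-style API.
template <typename T>
const T* host_ptr(const std::vector<T>& v)
{
  return v.data();
}

int main()
{
  std::vector<float> f(8, 1.0f);
  const float* pf = host_ptr(f);  // OK: float elements are stored contiguously
  (void)pf;

  std::vector<bool> b(8, true);
  // const bool* pb = host_ptr(b);  // fails to compile: the vector<bool>
  //                                // specialization packs bits and does not
  //                                // implement data() returning bool*
  (void)b;
  return 0;
}
```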