Small doc fix #5375
Conversation
```diff
@@ -396,7 +396,7 @@ void all_points_membership_vectors(const raft::handle_t& handle,
   size_t n = prediction_data.n_cols;

-  if (batch_size > m) batch_size = m;
-  RAFT_EXPECTS(0 < batch_size && batch_size <= m, "Invalid batch_size. batch_size should be > 0 and <= the number of samples in the training data");
+  ASSERT(0 < batch_size && batch_size <= m, "Invalid batch_size. batch_size should be > 0 and <= the number of samples in the training data");
```
Why the macro change?
Because it is not a raft-related exception. It is a condition that needs to be true within cuML itself.
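The distinction being discussed is that a `RAFT_EXPECTS`-style macro throws a library-specific exception type on failure, which callers can catch and attribute to the library. A minimal illustrative sketch (the macro and names below are simplified stand-ins, not RAFT's actual definitions):

```cpp
#include <cstddef>
#include <stdexcept>

// Stand-in for a library-specific exception type (as in raft/core/error.hpp).
struct logic_error : std::logic_error {
  using std::logic_error::logic_error;
};

// EXPECTS-style check: throws the library exception when the condition fails.
#define EXPECTS(cond, msg)               \
  do {                                   \
    if (!(cond)) throw logic_error(msg); \
  } while (0)

// Validates the batch_size precondition from the diff above.
bool valid_batch_size(std::size_t batch_size, std::size_t m)
{
  try {
    EXPECTS(0 < batch_size && batch_size <= m,
            "Invalid batch_size. batch_size should be > 0 and <= the number "
            "of samples in the training data");
    return true;
  } catch (logic_error const&) {
    return false;
  }
}
```

An `ASSERT`-style macro typically serves the same role for invariants internal to the calling library, which is the distinction the author is drawing here.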
Which header is ASSERT being pulled from, do you know?
It is from raft/core/error.hpp. But it looks like that will get replaced with RAFT_EXPECTS. Should I change the ASSERT back to RAFT_EXPECTS?
Yes, that's what I expected
```
1000 points at a time. The default batch_size is 4096. If the number
of rows in the original dataset is less than 4096, this defaults to
the number of rows.
all of the training points, batch_size of 1000 computes distance
```
Suggested change:

```diff
- all of the training points, batch_size of 1000 computes distance
+ all of the prediction points, batch_size of 1000 computes distance
```
```
smaller batches of points in the prediction data. Batch size of 0
uses all of the prediction points, batch_size of 1000 computes
```
Same as above:
```diff
  smaller batches of points in the prediction data. Batch size of 0
- uses all of the prediction points, batch_size of 1000 computes
+ implies using all prediction points in a single batch, batch_size of 1000 computes
```
We have gotten rid of the batch_size of 0 option, so I have deleted that part. To obtain membership vectors for all points, users now have to pass the number of rows as the batch size.
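To illustrate the semantics the author describes: with the 0-batch option removed, covering all `m` prediction points in one pass means passing `batch_size == m`. A minimal sketch of the batching arithmetic (a hypothetical helper, not cuML's actual implementation):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns the size of each batch when m prediction points are processed
// in chunks of at most batch_size rows (the last batch may be smaller).
std::vector<std::size_t> batch_counts(std::size_t m, std::size_t batch_size)
{
  std::vector<std::size_t> sizes;
  for (std::size_t start = 0; start < m; start += batch_size) {
    sizes.push_back(std::min(batch_size, m - start));
  }
  return sizes;
}
```

With `batch_size == m` this yields a single batch covering every point, which is the replacement for the old `batch_size = 0` behavior.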
/merge