Add cudf::strings::compute_regex_state_memory API #10808
This API seems fine for a "just in time" check for how much memory the call will take, but currently the RAPIDS Accelerator does "up front" planning on the driver node (which does not necessarily have a GPU). At that point we don't know what the column data will be, just the pattern that will be used.
Is there a way to get a worst-case ballpark estimate of what memory will be used based solely on the pattern? Alternatively, it would be great if we could specify a maximum amount of memory we want the regex to use internally, and the concurrent rows would shrink to fit within that memory (or fail if it cannot).
Right now (with this API) you could call it with 1 row and do the math to help estimate the max size.
I'm trying to avoid another API to set the size unless absolutely necessary.
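The "call it with 1 row and do the math" approach can be sketched as plain arithmetic. This is a hypothetical helper, not part of libcudf: `per_row_bytes` stands in for the value a single-row call to `cudf::strings::compute_regex_state_memory` would report, and the sketch assumes the per-row state size is constant across rows.

```cpp
#include <cstddef>

// Hypothetical helper (not a libcudf API): scale the state memory reported
// for a single row up to a column of num_rows rows. Assumes per-row regex
// state size is constant, per the "do the math" suggestion above.
std::size_t estimate_total_state_memory(std::size_t per_row_bytes,
                                        std::size_t num_rows)
{
  return per_row_bytes * num_rows;
}
```

This keeps the device-side call down to one row while letting the caller plan for any row count.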
But doesn't the row require having a device? We cannot require that the driver node have a device.
This API does require a device.
Currently the API requires a string_view, but only to get the number of rows within it. Can we just pass the number of rows directly, along with the pattern, rather than passing a type that requires a device?
Yes, that sounds reasonable.
@jlowe is it worthwhile (or even possible) to give this API a shot in the Spark accelerator before we merge, rather than merging an API that we end up just removing a release or two later? It sounds like there are questions about its utility, so it would be nice to do our due diligence first if that is possible.
I think it is possible to use this, but I am not sure about priorities for it right now. To use it we would essentially have to implement the API we want (#10852) on top of this one, but in a worse way: before running the regexp, call this to see how large the memory usage would be; if it is too large, slice the input into smaller pieces and call it again to see if we guessed right about where to slice; if it looks good, run the regexp kernel on each slice and then concat the results back together.
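The slicing step of that workflow amounts to shrinking the slice until the estimate fits a budget. The sketch below uses hypothetical names throughout: `per_row_bytes` stands in for a per-row figure derived from `cudf::strings::compute_regex_state_memory` (the real workflow would re-query the API per candidate slice rather than multiply).

```cpp
#include <cstddef>

// Sketch (hypothetical, not a libcudf API): halve the slice size until the
// estimated regex state memory for one slice fits within the budget. This
// mirrors the "slice, re-check, and retry" loop described above.
std::size_t choose_slice_rows(std::size_t total_rows,
                              std::size_t per_row_bytes,
                              std::size_t memory_budget_bytes)
{
  std::size_t slice_rows = total_rows;
  while (slice_rows > 1 && slice_rows * per_row_bytes > memory_budget_bytes) {
    slice_rows /= 2;  // guess again at where to slice
  }
  return slice_rows;
}
```

The extra estimate calls, the slicing, and the final concat are the overhead that makes this "worse" than a built-in memory cap.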
@revans2 so what is the best path forward here? Should we leave this PR up as is until the Spark team has a chance to decide on priorities and maybe give this a whirl?
I really don't see us using this in the current form. We can keep the code around on this branch in case things change and we have to resurrect it. But I don't see a reason to check it in or keep the PR open.