Improve BatchElements documentation #32082
Merged
Commits (7)
bf46735: Improve BatchElements documentation (jrmccluskey)
c149cb8: Add link to new documentation (jrmccluskey)
c8ecf04: Update sdks/python/apache_beam/transforms/util.py (jrmccluskey)
7748946: linting (jrmccluskey)
de328ef: Apply suggestions from code review (jrmccluskey)
a641344: line-too-long (jrmccluskey)
54a9fde: Update sdks/python/apache_beam/transforms/util.py (tvalentyn)
Conversations
Is throughput the right term here? I feel like a longer duration should increase throughput, because we reduce the per-element overhead. At least, if we measure throughput over a sufficiently long duration, say elements per hour.
However, the added latency might result in an increased data-freshness reading for downstream stages: https://cloud.google.com/dataflow/docs/guides/using-monitoring-intf#data_freshness_streaming.
WDYT about the following:
Larger max_batch_duration_secs values might increase the overall throughput of the transform, but might negatively impact data freshness on downstream transforms due to the added latency. Smaller values will have less impact on data freshness, but might make batches smaller than the target batch size.
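For concreteness, here is a minimal sketch of where this knob sits in a pipeline, using the Beam Python SDK's BatchElements parameters min_batch_size, max_batch_size, and max_batch_duration_secs; the specific sizes and the five-second cap are illustrative values, not recommendations:

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | beam.Create(range(100))
        # Buffer elements into batches of roughly 10-32 elements, but
        # flush any partially filled batch once it has been buffered
        # for 5 seconds, trading some batch size for data freshness.
        # (Illustrative values only.)
        | beam.BatchElements(
            min_batch_size=10,
            max_batch_size=32,
            max_batch_duration_secs=5)
        | beam.Map(print))
```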
Throughput is the right term; the point is that batching can create a bottleneck.
The documentation at https://beam.apache.org/documentation/patterns/batch-elements/ should outline the tuning more clearly. I think routing users there, along with the new docstring content, will help a lot.
So batching can cause the pipeline to emit fewer elements per sufficiently large unit of time?
Potentially, yes. The slowdown is most pronounced in the incomplete-bundle case. If we weren't using stateful batching, elements are emitted downstream as they arrive at BatchElements, since they're single-element bundles: if nine elements arrive within a span of five seconds, you've emitted those nine elements within that span as well (abstracting away any overhead from the code emitting the bundles of one). Meanwhile, if we're batching statefully, the target batch size is greater than nine, and our maximum buffer time is greater than five seconds, we'd emit those nine elements later, but together. We're just artificially increasing the denominator in the throughput fraction. The hope is that this tradeoff has some benefit to the downstream operation that is worth the potential bottleneck, but the documentation around that tradeoff was lacking prior to this change.
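A back-of-the-envelope version of that arithmetic (the ten-second flush time is an assumed illustrative value, not something specified above):

```python
# Nine elements arrive over five seconds, per the example above.
elements = 9
arrival_window_secs = 5.0
flush_secs = 10.0  # assumed buffer cap; must exceed the arrival window

# Without stateful batching, each single-element bundle is emitted as
# it arrives, so throughput over the arrival window is 9 / 5 = 1.8/s.
unbatched_throughput = elements / arrival_window_secs

# With stateful batching and a target batch size greater than nine,
# nothing is emitted until the buffer timer fires, so the same nine
# elements are divided by a larger denominator: 9 / 10 = 0.9/s.
batched_throughput = elements / flush_secs

print(unbatched_throughput, batched_throughput)  # 1.8 0.9
```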