Parallel prepare - open file when needed #355
Conversation
Codecov Report
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #355      +/-   ##
==========================================
- Coverage   89.59%   89.58%   -0.01%
==========================================
  Files          17       18       +1
  Lines        4929     4982      +53
==========================================
+ Hits         4416     4463      +47
- Misses        513      519       +6
/* Event loop to schedule IO work related on and requires to be serial, ie, reading from non-parallel streams,
 * streaming parts back to the caller, etc... After the meta request is finished, this will be reset along with the
 * client reference.*/
trivial: revert
I don't know what this edit is trying to say
I was trying to edit it because preparation now happens in parallel across different event loops, instead of only the one assigned to the request.
Co-authored-by: Michael Graeb <[email protected]>
With parallel preparation, the parallel stream opens the file when reading from it, seeks to the right offset, and closes the file after reading.
We checked that this has no performance impact compared to the mmap implementation, and it's simpler. Simpler is better.
With this approach, the user of the parallel input stream has to manage threads and concurrency, but it's only used by our S3 client, which already handles concurrency and the thread pool, so we are good.
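To illustrate the idea, here is a minimal sketch of the open-on-demand pattern using plain POSIX stdio. The helper name read_part_at_offset and the file name are hypothetical, and this is not the actual aws_parallel_input_stream implementation; it only shows why opening, seeking, reading, and closing per read leaves no shared handle or cursor between concurrent readers.

#define _POSIX_C_SOURCE 200809L /* for fseeko()/off_t */

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: read `len` bytes starting at `offset` from the file at
 * `path` into `dest`. Returns the number of bytes read, or -1 on failure. */
static long read_part_at_offset(const char *path, uint64_t offset, void *dest, size_t len) {
    FILE *file = fopen(path, "rb");
    if (file == NULL) {
        return -1;
    }

    long bytes_read = -1;
    if (fseeko(file, (off_t)offset, SEEK_SET) == 0) {
        bytes_read = (long)fread(dest, 1, len, file);
    }

    /* Close right away; the next read reopens the file, so callers on
     * different threads never share a file handle or cursor. */
    fclose(file);
    return bytes_read;
}

int main(void) {
    char part[256];
    /* Example: read the second 256-byte part of a hypothetical upload file. */
    long n = read_part_at_offset("upload.bin", 256, part, sizeof(part));
    if (n < 0) {
        fprintf(stderr, "failed to read part\n");
        return 1;
    }
    printf("read %ld bytes\n", n);
    return 0;
}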
Original: #353
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.