release-21.2: release-22.1: gcp,s3,azure: make the storage client upload chunk size configurable #87950
Closed
Conversation
This change adds a `cloudstorage.write_chunk_size` cluster setting that allows us to control the size of the chunks buffered by the cloud storage client when uploading a file to storage. The setting defaults to 8MiB. Prior to this change, GCS used a 16MB buffer, S3 a 5MB buffer, and Azure a 4MB buffer. A follow-up change will add memory monitoring to each external storage writer to account for these buffered chunks during upload.

This change was motivated by the fact that in google-cloud-storage SDK versions prior to v1.21.0, every chunk is given a hardcoded timeout of 32s to successfully upload to storage. This includes retries due to transient errors. If any chunk during a backup were to hit this timeout, the entire backup would fail. We have additional work to do to make the job more resilient to such failures, but dropping the default chunk size might mean we see fewer chunks hit their timeouts.

Release note: None
blathers-crl bot force-pushed the blathers/backport-release-21.2-80947 branch from e42ebe9 to a437a1c on September 14, 2022 13:42
blathers-crl bot requested review from msbutler, adityamaru, dt and stevendanna on September 14, 2022 13:42
Thanks for opening a backport. Please check the backport criteria before merging:

- If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.
- Add a brief release justification to the body of your PR to justify this backport.
blathers-crl bot added the blathers-backport (This is a backport that Blathers created automatically.) and O-robot (Originated from a bot.) labels on Sep 14, 2022
stevendanna approved these changes on Sep 26, 2022
Friendly ping! Should we merge this?
Reminder: it has been 3 weeks; please merge or close your backport!
Labels: blathers-backport (This is a backport that Blathers created automatically.), no-backport-pr-activity, O-robot (Originated from a bot.)
Backport 1/1 commits from #80947 on behalf of @blathers-crl[bot].
/cc @cockroachdb/release
Backport 1/1 commits from #80668 on behalf of @adityamaru.
/cc @cockroachdb/release
Release justification: low-risk, high-benefit change that reduces the chances of a backup failing because of chunk retry timeouts in the cloud storage SDK.