
release-22.1: gcp,s3,azure: make the storage client upload chunk size configurable #80947

Merged 1 commit on Jun 28, 2022

Conversation

blathers-crl[bot]

@blathers-crl blathers-crl bot commented May 3, 2022

Backport 1/1 commits from #80668 on behalf of @adityamaru.

/cc @cockroachdb/release


This change adds a `cloudstorage.write_chunk_size` cluster setting
that lets us control the size of the chunks buffered by the cloud
storage client when uploading a file to storage. The setting defaults
to 8MiB.

Prior to this change, GCS used a 16MB buffer, S3 a 5MB buffer, and Azure a 4MB
buffer. A follow-up change will add memory monitoring to each external storage
writer to account for these buffered chunks during upload.
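The buffering described above can be sketched as a tiny self-contained Go example. `chunkingWriter`, `chunkSizes`, and the `flush` callback are illustrative names, not CockroachDB's actual types; the toy 8-byte chunk size stands in for the 8MiB default:

```go
package main

import (
	"bytes"
	"fmt"
)

// chunkingWriter buffers incoming bytes and hands them to flush in
// fixed-size chunks, mirroring (in miniature) how a cloud-storage
// client uploads a file piece by piece.
type chunkingWriter struct {
	chunkSize int
	buf       bytes.Buffer
	flush     func([]byte) error // uploads one chunk
}

func (w *chunkingWriter) Write(p []byte) (int, error) {
	w.buf.Write(p)
	// Upload every full chunk as soon as it is buffered.
	for w.buf.Len() >= w.chunkSize {
		if err := w.flush(w.buf.Next(w.chunkSize)); err != nil {
			return 0, err
		}
	}
	return len(p), nil
}

// Close uploads whatever partial chunk remains.
func (w *chunkingWriter) Close() error {
	if w.buf.Len() > 0 {
		return w.flush(w.buf.Bytes())
	}
	return nil
}

// chunkSizes reports the chunk sizes produced for n bytes of input.
func chunkSizes(n, chunkSize int) []int {
	var sizes []int
	w := &chunkingWriter{
		chunkSize: chunkSize,
		flush: func(b []byte) error {
			sizes = append(sizes, len(b))
			return nil
		},
	}
	w.Write(make([]byte, n))
	w.Close()
	return sizes
}

func main() {
	fmt.Println(chunkSizes(20, 8)) // [8 8 4]
}
```

The chunk size is the unit the client retries and times out on, which is why making it configurable matters below.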

This change was motivated by the fact that in google-cloud-storage
SDK versions prior to v1.21.0, every chunk is given a hardcoded 32s
timeout to upload successfully, and retries due to transient errors
count against that timeout. If any chunk during a backup hits this
timeout, the entire backup fails. We have additional work to do to
make the job more resilient to such failures, but lowering the default
chunk size should mean fewer chunks hit their timeouts.

Release note: None


Release justification: low-risk, high-benefit change that reduces the chances of a backup failing because of chunk retry timeouts in the cloud storage SDK.

@blathers-crl blathers-crl bot requested review from a team and adityamaru and removed request for a team May 3, 2022 22:23
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-22.1-80668 branch from 9625f2d to 8c63305 Compare May 3, 2022 22:23
@blathers-crl
Author

blathers-crl bot commented May 3, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria below are satisfied:
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn't know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl blathers-crl bot requested review from dt and stevendanna May 3, 2022 22:23
@blathers-crl blathers-crl bot added blathers-backport This is a backport that Blathers created automatically. O-robot Originated from a bot. labels May 3, 2022
@cockroach-teamcity
Member

This change is Reviewable

@adityamaru
Contributor

friendly ping @dt @stevendanna, I think this should reduce some of the 503s we've been seeing

@adityamaru adityamaru force-pushed the blathers/backport-release-22.1-80668 branch from 8c63305 to 147bd0a Compare May 24, 2022 15:04
@@ -137,7 +137,7 @@ func (s *azureStorage) Writer(ctx context.Context, basename string) (io.WriteClo
defer sp.Finish()
_, err := azblob.UploadStreamToBlockBlob(
ctx, r, blob, azblob.UploadStreamToBlockBlobOptions{
BufferSize: 4 << 20,
Member commented:
we should re-test azure (by hand, I guess) with this before we backport. I have vague recollections of it having some hardcoded/magic numbers on these chunks, but I don't remember what they were, so I just want to confirm we're not breaking (untested) code in a stable release here.

Contributor commented:

yikes this fell off my radar, running them now and merging.

Contributor commented:

Ran all the tests in pkg/cloud/azure and they passed, merging!
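For context on the diff above: each provider previously hardcoded its own buffer size (the Azure writer's `4 << 20` shown here), and the change routes all three through one configurable value. A toy stand-in for that shape, where `writeChunkSize` and `defaultWriteChunkSize` are hypothetical names, not CockroachDB's actual settings API:

```go
package main

import "fmt"

// Instead of per-provider constants (GCS 16MB, S3 5MB, Azure 4MB),
// every writer asks one place for its chunk size. An override of
// zero means "use the default", matching the 8MiB default described
// in the PR body.
const defaultWriteChunkSize int64 = 8 << 20 // 8 MiB

func writeChunkSize(override int64) int64 {
	if override > 0 {
		return override // e.g. set via cloudstorage.write_chunk_size
	}
	return defaultWriteChunkSize
}

func main() {
	fmt.Println(writeChunkSize(0))       // 8388608
	fmt.Println(writeChunkSize(1 << 20)) // 1048576
}
```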

@github-actions

Reminder: it has been 3 weeks. Please merge or close your backport!

@adityamaru adityamaru merged commit f23d710 into release-22.1 Jun 28, 2022
@adityamaru adityamaru deleted the blathers/backport-release-22.1-80668 branch June 28, 2022 20:20
@blathers-crl
Author

blathers-crl bot commented Jun 28, 2022

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error getting backport branch release-pr-activity: unexpected status code: 404 Not Found

Backport to branch pr-activity failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

@adityamaru
Contributor

blathers backport 21.2

@blathers-crl
Author

blathers-crl bot commented Sep 14, 2022

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


Backport to branch 21.2 failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

Labels
blathers-backport This is a backport that Blathers created automatically. no-backport-pr-activity O-robot Originated from a bot.