
cloud/gcp: use the retry always policy for gcs #90352

Merged (1 commit) on Oct 24, 2022

Conversation

@rhu713 (Contributor) commented Oct 20, 2022

Previously, the retry policy for GCS was retry-idempotent. That policy allows reads to be retried, but it prevented the current write behavior from being retried. Given the at-least-once semantics of the existing uses of the GCS writer in CDC and backups, this patch changes the retry policy to always retry.

Fixes #90119

Release note: None
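For context, a minimal standalone sketch of what configuring this policy looks like with the Go GCS client (`cloud.google.com/go/storage`). This is not the actual CockroachDB patch; the bucket and object names are placeholders. The client defaults to `RetryIdempotent`, and `RetryAlways` opts writes into retries as well, which is safe here because duplicate uploads of the same object are tolerated by at-least-once producers like CDC and backup:

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/storage"
	"github.com/googleapis/gax-go/v2"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Retryer options can be set on the client, a bucket handle, or an
	// object handle; "example-bucket" is a placeholder.
	bucket := client.Bucket("example-bucket").Retryer(
		// RetryAlways retries all operations, including non-idempotent
		// writes, instead of the default RetryIdempotent.
		storage.WithPolicy(storage.RetryAlways),
		// Optional: tune the backoff between retry attempts.
		storage.WithBackoff(gax.Backoff{
			Initial: time.Second,
			Max:     30 * time.Second,
		}),
	)

	// Transient errors (e.g. 503s) during the upload are now retried.
	w := bucket.Object("example-object").NewWriter(ctx)
	if _, err := w.Write([]byte("payload")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```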

@rhu713 rhu713 requested a review from a team as a code owner October 20, 2022 15:26
@rhu713 rhu713 requested review from benbardin and removed request for a team October 20, 2022 15:26
@cockroach-teamcity (Member) commented:

This change is Reviewable

@rhu713 rhu713 requested review from dt and removed request for benbardin October 20, 2022 15:26
@dt (Member) left a comment

With an identical patch applied, my 45TB cluster succeeded in backing up once, after failing three times without it; however, it then failed two subsequent full backup attempts due to 503s. So it seems we haven't "solved" the 503s yet, but perhaps this makes them better (or we're still just seeing noise). It seems correct to retry though, so 👍 either way.

@rhu713 (Contributor, Author) commented Oct 24, 2022

bors r+

@craig (bot) commented Oct 24, 2022

Build succeeded:

@blathers-crl (bot) commented Oct 24, 2022

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from aa8ecce to blathers/backport-release-21.2-90352: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

You may need to manually resolve merge conflicts with the backport tool.

Backport to branch 21.2.x failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

Successfully merging this pull request may close these issues:

roachtest: backup/2TB/n10cpu4 failed