
make some s3 error retryable #5384

Merged · 2 commits · Sep 4, 2024

Conversation

trinity-1686a
Contributor

Description

Mark some S3 errors as retryable. The code in qw-storage already uses aws_retry() on write requests. The actual error we get from GCS is SlowDown, but the AWS SDK ships a list of throttling and transient error codes, so we use that instead of hard-coding a single value.
Improvement for #5211.
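A minimal sketch of the classification idea. The code-lists here are hypothetical local stand-ins; the real constants are `aws_runtime::retries::classifiers::{THROTTLING_ERRORS, TRANSIENT_ERRORS}` from the AWS SDK for Rust:

```rust
// Hypothetical local copies of the SDK's error-code lists, for illustration
// only; the real lists live in aws_runtime::retries::classifiers.
const THROTTLING_ERRORS: &[&str] = &["Throttling", "SlowDown", "RequestLimitExceeded"];
const TRANSIENT_ERRORS: &[&str] = &["RequestTimeout", "InternalError"];

/// Returns true if an S3-style error code should be retried.
fn is_retryable(error_code: &str) -> bool {
    THROTTLING_ERRORS.contains(&error_code) || TRANSIENT_ERRORS.contains(&error_code)
}

fn main() {
    // GCS's S3-compatible endpoint reports rate limiting as `SlowDown`.
    assert!(is_retryable("SlowDown"));
    // A missing object is a permanent error and should fail fast.
    assert!(!is_retryable("NoSuchKey"));
    println!("SlowDown retryable: {}", is_retryable("SlowDown"));
}
```

Matching against the SDK's full lists rather than a single code means other throttling variants (e.g. `Throttling`) are covered for free.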

How was this PR tested?

Added a unit test for retry behavior on Storage::put.
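The retry-on-put test pattern can be sketched as follows. This is a simplified, hypothetical mock, not the actual quickwit test: `retry` stands in for the real `aws_retry()` helper, and the closure stands in for `Storage::put`:

```rust
use std::cell::Cell;

// Hypothetical stand-in for the retry helper: re-run the operation while
// the error is retryable (here, only "SlowDown"), up to a small cap.
fn retry<T>(mut op: impl FnMut() -> Result<T, String>) -> Result<T, String> {
    const MAX_ATTEMPTS: usize = 3;
    let mut last_err = String::new();
    for _ in 0..MAX_ATTEMPTS {
        match op() {
            Ok(value) => return Ok(value),
            Err(code) if code == "SlowDown" => last_err = code, // retryable: try again
            Err(code) => return Err(code),                      // permanent: fail fast
        }
    }
    Err(last_err)
}

fn main() {
    // Mock `put` that gets rate-limited on the first call, then succeeds.
    let attempts = Cell::new(0);
    let result = retry(|| {
        attempts.set(attempts.get() + 1);
        if attempts.get() == 1 { Err("SlowDown".to_string()) } else { Ok(()) }
    });
    assert!(result.is_ok());
    assert_eq!(attempts.get(), 2); // exactly one retry was needed
    println!("attempts: {}", attempts.get());
}
```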

@@ -19,6 +19,7 @@

#![allow(clippy::match_like_matches_macro)]

use aws_runtime::retries::classifiers::{THROTTLING_ERRORS, TRANSIENT_ERRORS};
Member


Nice find.

@trinity-1686a trinity-1686a enabled auto-merge (squash) September 4, 2024 15:34
@trinity-1686a trinity-1686a merged commit 80364dd into main Sep 4, 2024
5 checks passed
@trinity-1686a trinity-1686a deleted the trinity/retry-rate-limited branch September 4, 2024 15:47

github-actions bot commented Sep 4, 2024

On SSD:

Average search latency is 0.992x that of the reference (lower is better).
Ref run id: 3241, ref commit: 0820c90
Link

On GCS:

Average search latency is 1.01x that of the reference (lower is better).
Ref run id: 3242, ref commit: 0820c90
Link

2 participants