[App Config] Handle throttling - do not hang - should honor abort signal #15721
Conversation
```typescript
 * Maximum wait duration for the expected event to happen = `10000 ms` (default value is 10 seconds) (= maxWaitTimeInMilliseconds)
 * Keep checking whether the predicate is true after every `1000 ms` (default value is 1 second) (= delayBetweenRetriesInMilliseconds)
 */
export async function checkWithTimeout(
```
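For context, a minimal sketch of what such a polling helper could look like, assuming the parameter names and default values described in the doc comment above (the actual implementation in the PR may differ):

```typescript
// Sketch of a test/polling utility: resolves true as soon as the predicate
// returns true, or false once maxWaitTimeInMilliseconds has elapsed.
// Parameter names and defaults follow the doc comment above; they are
// assumptions, not the PR's exact code.
export async function checkWithTimeout(
  predicate: () => boolean,
  delayBetweenRetriesInMilliseconds: number = 1000,
  maxWaitTimeInMilliseconds: number = 10000
): Promise<boolean> {
  const deadline = Date.now() + maxWaitTimeInMilliseconds;
  while (Date.now() < deadline) {
    if (predicate()) {
      return true;
    }
    // Wait before checking the predicate again.
    await new Promise((resolve) => setTimeout(resolve, delayBetweenRetriesInMilliseconds));
  }
  return predicate();
}
```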
Maybe I should move this to core-utils?
The entire throttling policy really should end up in the core-http (or similar) library.
The only real difference between this one and the 'official' one is that our AppConfig policy reacts to a thrown exception. The other one assumes the retry information will just be part of the response with no thrown error - perhaps that never actually worked.
```typescript
return this._nextPolicy.sendRequest(httpRequest.clone()).then((response) => {
```
Might not be worth it if app-config is the only service that returns the retry-after-ms header with the error.
If we see at least one other service that does this, we can move this policy to the core?
I'm wondering both ways - do we think the existing policy is ever actually used?
I don't think it's our code that's doing the throw so I'm wondering if the core-http policy works at all. Perhaps this is a question we can answer by going through some swaggers at some point.
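To make the difference being discussed concrete, here is a rough sketch (not the PR's actual code; class and variable names are illustrative) of a policy that retries based on a retry-after-ms header read from a thrown error, rather than from a returned response as core-http's official policy does. It deliberately omits the abort handling this PR adds:

```typescript
import {
  BaseRequestPolicy,
  HttpOperationResponse,
  RequestPolicy,
  RequestPolicyOptions,
  WebResource
} from "@azure/core-http";

// Illustrative sketch: App Config surfaces throttling as a *thrown* error,
// so the retry logic lives in .catch() and reads retry-after-ms from the
// error's response, unlike core-http's policy which inspects a returned
// 429 response directly.
export class AppConfigThrottlingRetryPolicySketch extends BaseRequestPolicy {
  constructor(nextPolicy: RequestPolicy, options: RequestPolicyOptions) {
    super(nextPolicy, options);
  }

  public async sendRequest(httpRequest: WebResource): Promise<HttpOperationResponse> {
    return this._nextPolicy.sendRequest(httpRequest.clone()).catch(async (err) => {
      const retryAfterMs = Number(err?.response?.headers?.get("retry-after-ms"));
      if (!Number.isFinite(retryAfterMs)) {
        // Not a throttling error we know how to handle; rethrow.
        throw err;
      }
      // Wait the service-suggested time, then retry the (cloned) request.
      await new Promise((resolve) => setTimeout(resolve, retryAfterMs));
      return this.sendRequest(httpRequest.clone());
    });
  }
}
```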
Looking good. Left some more comments.
```typescript
  timeoutMessage: `Unable to fulfill the request in ${delayInMs}ms when retried.`
});
await delay(delayInMs, httpRequest.abortSignal, StandardAbortMessage);
if (httpRequest.abortSignal?.aborted) {
```
It's almost redundant to check the abortSignal here.
Yeah, kept it just in case. Should I remove it?
My point is, the next sendRequest call should never be invoked if the request has already been aborted, and this felt like the way to ensure that.
No, keep it. It's correct, it just looks redundant because technically 'delay' does the same check. But, being 'async', it's possible for abortSignal.aborted to finally take effect and only become observable afterwards.
(kind of like double-checked locking)
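A sketch of the pattern being described, with `delayThenRetry` and `abortAwareDelay` as illustrative names (the delay helper is assumed to reject with an AbortError when the signal fires mid-wait):

```typescript
import { AbortError, AbortSignalLike } from "@azure/abort-controller";

// Sketch of the "double-checked" abort handling discussed above. The
// abortAwareDelay is assumed to reject if the signal fires during the wait;
// the explicit `.aborted` check afterwards covers a signal that flips right
// as the delay resolves, so the retried request is never issued for an
// already-aborted operation.
async function delayThenRetry<TResponse>(
  abortAwareDelay: (ms: number, signal?: AbortSignalLike) => Promise<void>,
  retry: () => Promise<TResponse>,
  delayInMs: number,
  abortSignal?: AbortSignalLike
): Promise<TResponse> {
  await abortAwareDelay(delayInMs, abortSignal);
  if (abortSignal?.aborted) {
    // Second check (similar to double-checked locking): the delay already
    // passed, but the signal may have been aborted in the meantime.
    throw new AbortError("The operation was aborted.");
  }
  return retry();
}
```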
Looks really great. Ping me when you've undrafted it.
Can you reload? It isn't a draft. :)
The test needs to be tightened up a bit.
Looks great!
[App Config] Handle throttling - do not hang - should honor abort signal (Azure#15721)

## Description

### Issue

- Azure#15576
- Azure/AppConfiguration#503

### Changelog

- High request rate would result in throttling. SDK would retry on the failed requests based on the service suggested time from the `retry-after-ms` header in the error response. If there are too many parallel requests, retries for all of them may also result in a high request rate, entering into a state which might seem like the application is hanging forever.
- [Azure#15721](Azure#15721) allows the user-provided abortSignal to be taken into account to abort the requests sooner.
- More resources - [App Configuration | Throttling](https://docs.microsoft.com/en-us/azure/azure-app-configuration/rest-api-throttling) and [App Configuration | Requests Quota](https://docs.microsoft.com/en-us/azure/azure-app-configuration/faq#which-app-configuration-tier-should-i-use)

Fixes Azure#15576
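For illustration, with this change a caller can cut throttled retries short with an abort signal, roughly like this (a sketch; the connection-string environment variable and the key name are placeholders):

```typescript
import { AppConfigurationClient } from "@azure/app-configuration";
import { AbortController } from "@azure/abort-controller";

// Hypothetical usage: abort the whole operation (including any
// retry-after-ms waits) after 5 seconds instead of hanging while throttled.
async function main(): Promise<void> {
  const client = new AppConfigurationClient(process.env["APPCONFIG_CONNECTION_STRING"]!);
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000);
  try {
    const setting = await client.getConfigurationSetting(
      { key: "example-key" },
      { abortSignal: controller.signal }
    );
    console.log(setting.value);
  } finally {
    clearTimeout(timer);
  }
}

main().catch(console.error);
```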
Fixes #15796

## Problem

The throttlingRetryPolicy in core-http has the potential to retry for an extended period if the service continues returning "retry after" headers on subsequent calls. Here's the snippet of code that handles the "retry after" retries:

```typescript
public async sendRequest(httpRequest: WebResource): Promise<HttpOperationResponse> {
  return this._nextPolicy.sendRequest(httpRequest.clone()).catch((err) => {
    // other code elided....
    return delay(delayInMs).then((_: any) => this.sendRequest(httpRequest.clone()));
```

## Solution

Update delay such that it respects the abort signal. Similar to what I had to do for app-config at #15721.
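An abort-aware delay along these lines would address that; this is a sketch of the idea, not the actual core-http implementation:

```typescript
import { AbortError, AbortSignalLike } from "@azure/abort-controller";

// Sketch of a delay that honors an abort signal: resolves after delayInMs,
// or rejects with an AbortError as soon as the signal is aborted.
export function abortableDelay(
  delayInMs: number,
  abortSignal?: AbortSignalLike,
  abortErrorMsg?: string
): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    if (abortSignal?.aborted) {
      // Already aborted: fail fast without waiting at all.
      return reject(new AbortError(abortErrorMsg));
    }
    const onAbort = (): void => {
      clearTimeout(timer);
      reject(new AbortError(abortErrorMsg));
    };
    const timer = setTimeout(() => {
      abortSignal?.removeEventListener("abort", onAbort);
      resolve();
    }, delayInMs);
    abortSignal?.addEventListener("abort", onAbort);
  });
}
```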
## Description

### Issue

- ThrottlingRetryPolicy can retry infinitely #15576

### Changelog

- High request rate would result in throttling. SDK would retry on the failed requests based on the service suggested time from the `retry-after-ms` header in the error response. If there are too many parallel requests, retries for all of them may also result in a high request rate, entering into a state which might seem like the application is hanging forever.

Fixes #15576