fix: regulation-worker changes for oauth destinations #2730
Conversation
Codecov Report
Additional details and impacted files:
@@ Coverage Diff @@
## master #2730 +/- ##
=========================================
Coverage ? 46.74%
=========================================
Files ? 298
Lines ? 48951
Branches ? 0
=========================================
Hits ? 22881
Misses ? 24605
Partials ? 1465
☔ View full report at Codecov.
- Update the error message to include the actual response
- Update naming
- Make maxRetryAttempts configurable
- Include jobId in logs
 // Prepares a payload based on (job, destDetail) and makes an API call to the transformer.
 // Gets (status, failure_reason), which is converted to an appropriate model.Error and returned to the caller.
-func (api *APIManager) Delete(ctx context.Context, job model.Job, destination model.Destination) model.JobStatus {
+func (api *APIManager) deleteWithRetry(ctx context.Context, job model.Job, destination model.Destination, retryAttempts int) model.JobStatus {
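For context, one plausible reading of this rename (a sketch inferred from the diff; the wrapper below is an assumption, not the repo's exact code) is that the exported Delete now just seeds the attempt counter and delegates:

// Hypothetical wrapper: Delete seeds the attempt counter at 0, so the
// recursion depth inside deleteWithRetry is bounded by
// api.MaxOAuthRefreshRetryAttempts. model.Job, model.Destination and
// APIManager come from the repo.
func (api *APIManager) Delete(ctx context.Context, job model.Job, destination model.Destination) model.JobStatus {
	return api.deleteWithRetry(ctx, job, destination, 0)
}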
retryAttempts -> retryAttempt or retryAttemptCount or something similar makes more sense, since it indicates how many attempts have happened so far, right?
It can go either way:
- the count of retries that have happened so far
- the current retry number

I'd prefer the latter.
 OAuth: OAuth,
+MaxOAuthRefreshRetryAttempts: config.GetInt("RegulationWorker.oauth.maxRefreshRetryAttempts", 1),
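For illustration, the default-fallback semantics assumed for this config lookup, as a self-contained sketch (the getInt helper below is hypothetical, not the repo's config package):

package main

import "fmt"

// getInt mimics the assumed semantics of config.GetInt: return the configured
// value when the key is set, otherwise fall back to the supplied default.
func getInt(configured map[string]int, key string, defaultValue int) int {
	if v, ok := configured[key]; ok {
		return v
	}
	return defaultValue
}

func main() {
	cfg := map[string]int{} // key not set
	// Default of 1 applies: a single OAuth-triggered retry per job.
	fmt.Println(getInt(cfg, "RegulationWorker.oauth.maxRefreshRetryAttempts", 1))
}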
I'd not tie this variable to OAuth as it's a delete retry we are doing rather than a "refresh retry".
Moreover, we might retry a delete request due to other failure scenarios that could possibly succeed in the next retry.
The retry is only required for the OAuth case, to prevent a failed job from being retried after the token's validity period has lapsed.
-if isOAuthEnabled && isTokenExpired(jobResp) {
+if isOAuthEnabled && isTokenExpired(jobResp) && currentOauthRetryAttempt < api.MaxOAuthRefreshRetryAttempts {
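To make the branch concrete, a sketch of the assumed surrounding control flow (isTokenExpired and deleteWithRetry follow the diff; getJobStatus's signature and the rest are assumptions):

jobStatus := getJobStatus(resp.StatusCode, jobResp)
if isOAuthEnabled && isTokenExpired(jobResp) && currentOauthRetryAttempt < api.MaxOAuthRefreshRetryAttempts {
	// Refresh the OAuth token, then retry the delete with the attempt
	// counter incremented; the counter bounds the recursion depth.
	return api.deleteWithRetry(ctx, job, destination, currentOauthRetryAttempt+1)
}
// Attempts exhausted (or token still valid): return the status computed
// from the last transformer response as-is.
return jobStatus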
What will the jobStatus be when isOAuthEnabled && isTokenExpired && currentOauthRetryAttempt >= api.MaxOAuthRefreshRetryAttempts?
Whatever is calculated as part of getJobStatus() in the last retry (when currentOauthRetryAttempt = api.MaxOAuthRefreshRetryAttempts - 1).
There is already a retry mechanism by which regulation-manager feeds already-failed jobs back to regulation-worker to send to the destination, so I feel this should be OK. Let me know if you think otherwise.
I will try to rephrase the question:
What will the transformer's HTTP response code be whenever it returns at least one entry with authErrorCategory=REFRESH_TOKEN in its response body? Is it deterministic or not?
Whenever at least one entry with authErrorCategory = REFRESH_TOKEN is present in the response body, the transformer returns a status code of 500, as of today.
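Given that behaviour, isTokenExpired presumably scans the response entries for that category; a minimal sketch (the JobRespSchema type and its field names are assumptions):

func isTokenExpired(jobResp []JobRespSchema) bool {
	for _, entry := range jobResp {
		// One REFRESH_TOKEN entry is enough to treat the whole response as
		// token-expired, matching the 500 behaviour described above.
		if entry.AuthErrorCategory == "REFRESH_TOKEN" {
			return true
		}
	}
	return false
}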
When you say deterministic, are you asking how we identify perpetual refresh-token failures, i.e. the scenario where, even after multiple retries (when api.MaxOAuthRefreshRetryAttempts > 1), we still get a refresh_token error from the transformer? Or is it something else?
Just trying to make sure that the jobStatus will be model.JobStatusFailed if the token refresh fails or we have reached the maximum refresh attempts :)
Description
Previously, for OAuth destinations the regulation-worker would retry a regulation job recursively as soon as the refresh-token request completed. According to the source code, this recursion could repeat an unbounded number of times for a single job (although in practice this is rarely seen). To remove that risk, this PR restricts the retry to only once by default; a standalone sketch of the bounded-retry idea follows below.
Several other changes are included as well (see the change list above).
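As a standalone illustration of the bounded-retry idea (a self-contained sketch, not the repo's code):

package main

import "fmt"

// deleteWithRetry models the fix: the attempt counter caps the recursion
// depth, so a perpetually expired token can no longer recurse forever.
func deleteWithRetry(attempt, maxAttempts int, tokenExpired func() bool) string {
	if tokenExpired() && attempt < maxAttempts {
		// A real implementation would refresh the token here before retrying.
		return deleteWithRetry(attempt+1, maxAttempts, tokenExpired)
	}
	if tokenExpired() {
		return "failed" // retries exhausted; surface the last failure
	}
	return "complete"
}

func main() {
	alwaysExpired := func() bool { return true }
	// With maxAttempts = 1 the job is retried at most once and then fails.
	fmt.Println(deleteWithRetry(0, 1, alwaysExpired)) // prints: failed
}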
Notion Ticket
Notion
Security