token refresh offset #12136
Should we be logging refreshes which fail here? Is this already done in _redeem_refresh_token?
This is a good question.
I am leaning towards not logging it because:
This logic:
seems to be present in most if not all the credentials. Perhaps it could be moved into a base or mixin, and have the implementation just provide a callback or an override for the
# get new token
functionality?
Agreed. But different credentials have different ways to refresh/redeem tokens. So I have not found a clean way to do it.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
What do you think of something like this:
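A hypothetical sketch of the kind of base/mixin being discussed (all names, defaults, and structure here are illustrative, not taken from this PR): the shared caching/refresh bookkeeping lives in one place, and each credential only overrides how a new token is obtained.

```python
import time

from azure.core.credentials import AccessToken


class _TokenRefreshMixin:
    """Illustrative only: shared caching/refresh bookkeeping for credentials."""

    _token_refresh_offset = 120        # start refreshing this many seconds before expiry
    _token_refresh_retry_timeout = 30  # cool-down after a failed proactive refresh

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cached_token = None
        self._last_refresh_attempt = 0

    def _request_token(self, *scopes, **kwargs) -> AccessToken:
        """Each credential provides its own way of redeeming/refreshing a token."""
        raise NotImplementedError

    def get_token(self, *scopes, **kwargs) -> AccessToken:
        now = int(time.time())
        token = self._cached_token
        if not token or token.expires_on <= now:
            # no usable token: always attempt to acquire one
            token = self._cached_token = self._request_token(*scopes, **kwargs)
        elif token.expires_on - now < self._token_refresh_offset:
            # token is valid but close to expiry: refresh proactively,
            # observing the retry cool-down after a failed attempt
            if now - self._last_refresh_attempt > self._token_refresh_retry_timeout:
                self._last_refresh_attempt = now
                try:
                    token = self._cached_token = self._request_token(*scopes, **kwargs)
                except Exception:
                    # keep serving the still-valid cached token if the refresh fails
                    pass
        return token
```

A concrete credential would then mix this in and implement _request_token with its own redeem/refresh call.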
Do you mean we make a shared credential base?
I would like to do that in a separate issue/PR as a code refactoring.
Refactoring always has a lower priority than new features. Merging this code is an open-ended commitment to maintaining it as is, so it's worth investigating a better organization now. The one I sketched may have its own problems (e.g. multiple inheritance would require some care) but it seems workable. What do you think? Have you tried something similar already?
I think when we do the refactoring by adding a shared base class for all credentials, we can go further than just this. But I don't want to rush it right before a release.
#12601
Should this be
or is there some rationale for always using 30 seconds?
It is not _token_refresh_timeout.
We don't have a clear design for this value, but it must be less than _token_refresh_offset (which defaults to 120); otherwise it would hide the auto-refresh feature.
The old value of 300 does not meet that requirement, so I updated it to 30.
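Roughly, the constraint is that a refresh attempt deferred by the retry timeout must still land inside the refresh window; a small illustration with the values mentioned above (expires_on is an arbitrary example timestamp):

```python
# Illustration only: a token expires at expires_on, and proactive refresh can
# only happen inside the window [expires_on - offset, expires_on]. If an attempt
# fails at the start of the window, the next one is deferred by retry_timeout.
offset = 120                                        # _token_refresh_offset default
expires_on = 1_000_000
window_opens = expires_on - offset                  # first proactive refresh attempt

retry_timeout = 300                                 # old value: larger than the window
assert window_opens + retry_timeout > expires_on    # retry lands after expiry

retry_timeout = 30                                  # new value fits inside the window
assert window_opens + retry_timeout < expires_on    # retry can still happen in time
```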
I wonder whether we need an explicit margin here. The 1s margin in
if expires_on > int(time.time())
seems okay to me. My reasoning:

- with token_refresh_offset=300, a token within that window of expiry causes this method to return None, prompting the caller to acquire a new token
- token_refresh_offset will now be observed by callers of this method

One bad outcome that could follow is the caller using a token that expires in flight. That request will fail, but the caller's other option was to raise without sending the request at all, because it couldn't acquire a new token. It seems better to try the request, which could after all succeed.
What do you think?
The difference is when it is still within the token_refresh_retry_timeout time frame.
Extreme case: the user gets a token from us which expires in 1s. It is still within the token_refresh_retry_timeout time frame, so it does not get refreshed.
vs
They get None from us, which forces a refresh.
But if the credential is waiting on the retry timeout, it won't try to get a new token, regardless of what it gets back from the cache. Returning None in that case only guarantees the current request will fail, no?
No. If there is no valid token (the cache returns None), we will try to get one whether or not we are within the retry timeout window.
The retry timeout only applies when there is a valid token but it is within the refresh offset window.
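As a sketch of that distinction (the helper name and structure are hypothetical, not from this PR): the retry timeout only gates proactive refresh of a still-valid token, while an empty or expired cache always triggers an acquisition attempt.

```python
def _should_request_new_token(cached, now, offset, retry_timeout, last_attempt):
    """Hypothetical helper illustrating the behavior described above."""
    if cached is None or cached.expires_on <= now:
        return True                                 # no valid token: always try
    if cached.expires_on - now < offset:            # valid but inside the refresh window
        return now - last_attempt > retry_timeout   # only retry after the cool-down
    return False                                    # valid and not near expiry: use cache
```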
Ah, I overlooked this behavior. Credentials should observe the retry timeout when the cache is empty.
I'm not sure this is the behavior we want. If we have no access_token and the first attempt to get one failed, do we really want to hold all requests for 30 seconds before attempting to get one? I think we need to clarify this more.
My opinion is that if there is no token available, then every time the user calls our library to get one, we should try, without a cool-down period.