Add rate limiter to the Vertical Pod autoscaler updater component #2326
Wait always asks for one token, so it doesn't make much sense to set the burst rate to anything other than 1. If I'm correct, we should remove the eviction-rate-limit-burst parameter.

Sorry, the golang.org/x/time/rate documentation is not really clear. Looking at the code, burst is also the maximum number of tokens you can accrue over time. This matters because it means VPA can't suddenly evict lots of pods after not evicting anything for a long time.
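A minimal sketch of the x/time/rate semantics discussed above, with illustrative rate and burst values rather than the updater's real flag wiring: Wait always requests a single token, and burst only caps how many tokens can accumulate while the limiter sits idle.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Illustrative values only; the real ones come from the updater's flags
	// (e.g. the eviction-rate-limit-burst parameter discussed above).
	evictionsPerSecond := 0.5 // one token every two seconds on average
	burst := 1

	limiter := rate.NewLimiter(rate.Limit(evictionsPerSecond), burst)

	// Wait blocks until a single token is available. burst therefore matters
	// only as the cap on tokens accrued while idle: with burst == 1, even a
	// long quiet period allows at most one immediate eviction afterwards.
	for i := 0; i < 3; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("wait cancelled:", err)
			return
		}
		fmt.Println("token acquired at", time.Now().Format(time.StampMilli))
	}
}
```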
Without knowing the autoscaler well, I'm a little worried that Wait could block here for a long time (e.g. minutes). The state of the cluster might change significantly during that time.
An idea for solving the Wait()/blocking problem: evict pods as long as u.evictionRateLimiter.Allow() returns true. If u.evictionRateLimiter.Allow() returns false, return from the RunOnce function. Any remaining pods to be evicted will be retried in the next RunOnce run.
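A rough sketch of that Allow()-based idea; the pod list, the evict callback, and the leftover slice are stand-ins for the updater's internals, not the actual RunOnce code.

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// evictWithBudget is a hypothetical stand-in for the eviction step inside
// RunOnce: it evicts pods only while the limiter has tokens and returns the
// pods it did not get to, which would simply be retried on the next run.
func evictWithBudget(limiter *rate.Limiter, pods []string, evict func(string) error) []string {
	for i, pod := range pods {
		// Allow never blocks: it consumes a token if one is available
		// and reports false otherwise.
		if !limiter.Allow() {
			return pods[i:] // leftover work for the next RunOnce run
		}
		if err := evict(pod); err != nil {
			fmt.Printf("evicting %s failed: %v\n", pod, err)
		}
	}
	return nil
}

func main() {
	limiter := rate.NewLimiter(rate.Limit(0.5), 1) // illustrative values
	pods := []string{"default/web-1", "default/web-2", "default/web-3"}
	left := evictWithBudget(limiter, pods, func(pod string) error {
		fmt.Println("evicting", pod)
		return nil
	})
	fmt.Println("deferred to next run:", left)
}
```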
Thanks for your input 🙂
I'm not really sure why the Wait is an issue here: in your proposed solution, we would hit the rate limiter quickly and exit, so we would "waste" n seconds of the ticker loop doing nothing. Running on fresher data is an excellent idea, but the only option I see would be to refresh the pod list between every Wait, which doesn't seem ideal to me.
The default tick for the updater is 1 minute, so as @guillaumebreton writes, the Wait will be canceled after at most that time (unless someone uses a custom tick). So we can basically look at info that is stale by up to 1 minute, which is also true whenever the updater has a lot of work to do. I'm inclined to agree this solution is acceptable.
As to randomizing the list of pods to evict: the updater has a priority mechanism to select which pods to evict first, based on multiple factors (including how far they are from their recommended resources). Shuffling the list wouldn't play well with that.
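For illustration, a sketch of how a Wait-based loop stays bounded by the tick, assuming RunOnce receives a context whose deadline matches the updater interval; the names and values are illustrative, not the actual updater code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// runOnce is a hypothetical sketch of one updater cycle: Wait blocks until a
// token is available or the context expires, so a single cycle can never
// stall past the tick that started it.
func runOnce(ctx context.Context, limiter *rate.Limiter, pods []string) {
	for _, pod := range pods {
		if err := limiter.Wait(ctx); err != nil {
			// Deadline hit; the remaining pods are re-evaluated next tick,
			// so the data the updater acts on is at most one tick stale.
			fmt.Println("stopping this cycle:", err)
			return
		}
		fmt.Println("evicting", pod)
	}
}

func main() {
	tick := time.Minute                            // the updater's default interval
	limiter := rate.NewLimiter(rate.Limit(0.2), 1) // illustrative rate

	ticker := time.NewTicker(tick)
	defer ticker.Stop()
	for range ticker.C {
		ctx, cancel := context.WithTimeout(context.Background(), tick)
		runOnce(ctx, limiter, []string{"default/web-1", "default/web-2"})
		cancel()
	}
}
```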