VPA: Configurable container limit scaling #3028
Conversation
/assign @jbartosik
Apologies for the very long wait.
As I see demand for this feature, I'm sold on the approach 👍 I added some comments, but nothing major.
Please do add tests and update the generated files (in a separate commit).
Ideally we would also have an e2e test verifying that turning limit scaling off works as expected; there are already tests for limit scaling.
Resolved review threads (now outdated):
- vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go (3 threads)
- ...d-autoscaler/pkg/admission-controller/resource/pod/recommendation/recommendation_provider.go (2 threads)
Force-pushed 795f4aa to 2d63e89
Force-pushed 2d63e89 to 8fdc63d
Thank you for your feedback. I addressed the first batch of review comments and will add tests tomorrow.
Force-pushed 30940f8 to 71ff275
@bskiba comments addressed and tests added :-)
Force-pushed 71ff275 to 3e664f1
/assign @bskiba
I'll take another look later today.
Apologies, I didn't make it today 😓; I will take a look tomorrow.
Once again sorry for taking forever.
Would it be possible to split into multiple commits?
- Added types
- Autogen
- Implementation
- e2e tests
- vendor and go.mod/go.sum changes
Also, since there are changes in go.sum, can you run go mod vendor to check whether the vendor/ directory changes?
// Whether autoscaler limit scaling is enabled for the container. The default is "Auto".
// Enabling this requires the autoscaler to be enabled for the container.
// +optional
LimitMode *ContainerLimitScalingMode `json:"limitMode,omitempty" protobuf:"bytes,6,rep,name=limitMode"`
I've thought about this a bit more and I think it would be good to make the setting here more explicit and change it to something like:
ControlledValues: RequestsOnly/RequestsAndLimits
with RequestsAndLimits being the default.
WDYT?
That reads better, updated :-)
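For context, a minimal sketch of what the renamed setting could look like in types.go, using the names discussed above (ControlledValues with RequestsAndLimits and RequestsOnly). The identifiers and struct tags below are assumptions drawn from this thread, not the exact code of the PR:

```go
package v1

// ContainerControlledValues controls which resource values the autoscaler may
// change for a container. Names assumed from the review discussion; check the
// merged types.go for the authoritative identifiers and tags.
type ContainerControlledValues string

const (
	// RequestsAndLimits: set requests and scale limits proportionally (assumed default).
	ContainerControlledValuesRequestsAndLimits ContainerControlledValues = "RequestsAndLimits"
	// RequestsOnly: set requests only and leave the limits from the pod spec untouched.
	ContainerControlledValuesRequestsOnly ContainerControlledValues = "RequestsOnly"
)

// ContainerResourcePolicy is shown here with only the new field, for brevity.
type ContainerResourcePolicy struct {
	// Which resource values should be controlled by VPA.
	// The default is "RequestsAndLimits".
	// +optional
	ControlledValues *ContainerControlledValues `json:"controlledValues,omitempty"`
}
```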
// Specifies the type of recommendations that will be computed
// (and possibly applied) by VPA.
// If not specified, the default of [ResourceCPU, ResourceMemory] will be used.
ControlledResources *[]v1.ResourceName `json:"controlledResources,omitempty" patchStrategy:"merge" protobuf:"bytes,5,rep,name=controlledResources"`
Any idea why ControlledResources shows up as diff here? I think it's already merged into master. Or are my eyes playing tricks on me?
The vendored e2e types don't include this in master https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/e2e/vendor/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L146-L149
Force-pushed 3e664f1 to 95b7d1e
@bskiba rebased, implemented the requested changes and split the commits as requested :-)
I think we're close \o/ Thanks for structuring this and for your patience :)
Left two small comments.
Question: Is it possible in the RequestsOnly mode that we will try to create a pod with request > limit?
We need to avoid that, since it causes the pod to be rejected by the API server.
Resolved review threads:
- vertical-pod-autoscaler/pkg/admission-controller/resource/vpa/handler.go (now outdated)
- ...oscaler/pkg/admission-controller/resource/pod/recommendation/recommendation_provider_test.go
The Travis failure seems unrelated (spellcheck failed in Cluster Autoscaler).
Force-pushed 95b7d1e to 53e4758
@bskiba I added code and a test so that any provided recommendation always stays below the container limits. Is this what you had in mind?
Ping @bskiba :-)
Looks good, I have one more request for the capping-to-limit code.
Thanks for the additional tests!
@@ -70,6 +70,22 @@ func GetContainersResources(pod *core.Pod, vpaResourcePolicy *vpa_types.PodResou
			}
		}
	}
	// Ensure container requests always stay below container limits
Can you take a look at this: 8bc98b7
This is the PR that removed recommendation-to-limit capping when we started doing proportional limit scaling. The capping was applied to all recommendation parts (target, upper bound and lower bound). Could you reuse that solution? (Sorry for only providing it now.)
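To make the requested capping concrete, here is a rough sketch that caps a recommendation to the container's configured limit and applies the cap to target, lower bound and upper bound alike. The helper names are illustrative assumptions; the PR itself reuses the existing proportional-limit code path rather than this exact code.

```go
package recommendation

import (
	core "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"

	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

// capToLimit returns the recommended quantity, lowered to the container limit
// when the recommendation would exceed it. A zero limit means "no limit set",
// so the recommendation passes through unchanged.
func capToLimit(recommended, limit resource.Quantity) resource.Quantity {
	if !limit.IsZero() && recommended.Cmp(limit) > 0 {
		return limit
	}
	return recommended
}

// capRecommendationToContainerLimit caps one resource in all recommendation
// parts (target, lower bound, upper bound), so that in RequestsOnly mode the
// admission controller never writes a request above the untouched limit.
func capRecommendationToContainerLimit(rec *vpa_types.RecommendedContainerResources, name core.ResourceName, limit resource.Quantity) {
	for _, part := range []core.ResourceList{rec.Target, rec.LowerBound, rec.UpperBound} {
		if recommended, ok := part[name]; ok {
			part[name] = capToLimit(recommended, limit)
		}
	}
}
```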
@bskiba updated. Does that look okay now?
Force-pushed 44fe28a to 86a3c30
Force-pushed 86a3c30 to 727dab6
Ping @bskiba :-)
Thanks for this change and thanks a lot for your patience with the reviewer (aka me)!
Looks good
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: bskiba
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Make proportional limit scaling configurable.
Follow-up of #2359, #2362 and #2387.
I'm happy to update the generated files and add tests if the approach looks acceptable.
//cc @Avanpourm @bskiba
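For readers arriving from the linked issues, a hedged usage sketch of the new per-container setting, constructed in Go against the VPA API types. Field and constant names follow the review discussion above and should be checked against the final types.go before use.

```go
package main

import (
	"fmt"

	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

func main() {
	// Keep VPA in charge of requests only; the limits from the pod spec are
	// left as written instead of being scaled proportionally.
	requestsOnly := vpa_types.ContainerControlledValuesRequestsOnly

	policy := vpa_types.PodResourcePolicy{
		ContainerPolicies: []vpa_types.ContainerResourcePolicy{
			{
				ContainerName:    "*",
				ControlledValues: &requestsOnly,
			},
		},
	}
	fmt.Printf("%+v\n", policy)
}
```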