[chart] Allow resources override for node DaemonSet + priorityClassName #732
Conversation
Welcome @dntosas!
Hi @dntosas. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Pull Request Test Coverage Report for Build 1580 (Coveralls)
Force-pushed from c5c512b to e81e584.
/ok-to-test
@@ -80,7 +81,7 @@ spec:
     timeoutSeconds: 3
     periodSeconds: 10
     failureThreshold: 5
-  {{- with .Values.resources }}
+  {{- with .Values.node.resources }}
I am not a big fan of this change. It would be a breaking change and force operators to update their setup. Can we somehow give this new value priority and, if it doesn't exist, fall back to .Values.resources? We can add a deprecation note for the old field in the values file.
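A minimal sketch of the kind of fallback this suggests, using the sprig `default` helper; the surrounding indentation is illustrative and this is not necessarily the exact change the PR ended up with:

```yaml
# Sketch: prefer the new node-specific value and fall back to the
# deprecated top-level .Values.resources when node.resources is unset.
{{- with default .Values.resources .Values.node.resources }}
resources:
  {{- toYaml . | nindent 12 }}
{{- end }}
```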
Good catch ^^
Pushed some changes, adding a condition so the additions would be backwards compatible. Tell me if something like that would work for you :)
@@ -103,7 +104,7 @@ spec:
     mountPath: /csi
   - name: registration-dir
     mountPath: /registration
-  {{- with .Values.resources }}
+  {{- with .Values.node.resources }}
Similarly here. I understand this makes more sense, but we still have to make sure we don't break existing setups without any warning.
The CSI controller and node components have different capacity needs, so this commit lets users define specific resources for the node component. This allows users to avoid reserving unneeded resources on all of their instances, since the node component is a DaemonSet and may not need as much CPU/memory as the controller Pods. Signed-off-by: dntosas <[email protected]>
CSI components, and especially the node one, may be critical for operators, so this commit lets them define a priority class for these Pods. Signed-off-by: dntosas <[email protected]>
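For illustration, the new knobs would be set under a node block in values.yaml; the field names follow the PR title, while the concrete numbers and the priority class below are assumptions, not the chart's defaults:

```yaml
# Illustrative values.yaml excerpt (values are made up, not chart defaults).
node:
  priorityClassName: system-node-critical   # any existing PriorityClass works
  resources:
    requests:
      cpu: 10m
      memory: 40Mi
    limits:
      cpu: 100m
      memory: 256Mi
```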
Force-pushed from c9b1c39 to 5a03305.
Looks great, thanks! It'd be great if you can follow up with a small CR to add a comment for the resources fields so we have a deprecation warning. /lgtm
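Such a deprecation warning could be as small as a comment next to the old field in values.yaml; a sketch, with wording that is an assumption rather than what was merged:

```yaml
# Deprecated: set node.resources instead; this top-level field is kept
# only for backwards compatibility and may be removed in a future release.
resources: {}
```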
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: ayberk, dntosas
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The CSI controller and node components have different capacity needs, so this commit lets users define specific resources for the node component. This allows users to avoid reserving unneeded resources on all of their instances, since the node component is a DaemonSet and may not need as much CPU/memory as the controller Pods.
Is this a bug fix or adding new feature?
New feature: adds the ability to override resources for the node DaemonSet.
What is this PR about? / Why do we need it?
It mitigates overprovisioning of capacity for the node component.
What testing is done?
Tested on our internal clusters