Correct volume limits for m6id and x2idn instance types #1463
Conversation
Welcome @talnevo!
Hi @talnevo. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
LGTM
/ok-to-test
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: sunir1, torredil. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Is this a bug fix or adding a new feature?
This is a bug fix
What is this PR about? / Why do we need it?
This PR adds support for the m6id and x2idn instance types to volume_limits.go so that the Kubernetes scheduler can correctly determine the number of volumes that can be attached to these node types (a sketch of this kind of change follows below).
This PR also removes the x2idn instance types from the non-Nitro list, since x2idn is a Nitro-based family.
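For illustration, here is a minimal, self-contained sketch of the kind of table change described above: adding per-type volume limits for m6id and x2idn sizes and dropping x2idn from a non-Nitro set. All identifiers (nonNitroInstanceFamilies, maxVolumeLimits, volumeLimitFor) and all numeric limits are assumptions made up for this example; the driver's real names and values live in volume_limits.go.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative stand-in for the driver's non-Nitro family set. The PR
// removes x2idn from the real equivalent of this set, because x2idn is
// a Nitro-based family.
var nonNitroInstanceFamilies = map[string]struct{}{
	"t2": {},
	"m4": {},
	// "x2idn": {}, // removed by this PR (illustrative)
}

// Illustrative stand-in for per-instance-type volume limits. The numbers
// below are made up for the example; real limits come from EC2 attachment
// slot accounting.
var maxVolumeLimits = map[string]int{
	"m6id.12xlarge":  22,
	"m6id.16xlarge":  22,
	"m6id.32xlarge":  22,
	"x2idn.16xlarge": 22,
	"x2idn.32xlarge": 22,
}

// volumeLimitFor returns the advertised EBS volume limit for an instance
// type: a per-type override if present, else a family-based default.
func volumeLimitFor(instanceType string) int {
	if limit, ok := maxVolumeLimits[instanceType]; ok {
		return limit
	}
	family := strings.SplitN(instanceType, ".", 2)[0]
	if _, nonNitro := nonNitroInstanceFamilies[family]; nonNitro {
		return 39 // classic non-Nitro default (illustrative)
	}
	return 25 // common Nitro default (illustrative)
}

func main() {
	for _, it := range []string{"m6id.16xlarge", "x2idn.32xlarge", "m4.4xlarge"} {
		fmt.Printf("%s -> %d attachable volumes\n", it, volumeLimitFor(it))
	}
}
```

Without a per-type entry, a type misclassified as non-Nitro falls through to the larger default, which is exactly the overcommit this PR fixes.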
What testing is done?
No direct testing was performed for this change. However, Kubernetes tests on nodes based on the m6id.12xlarge, m6id.16xlarge, m6id.32xlarge, x2idn.16xlarge, and x2idn.32xlarge instance types have shown a gap between the number of volume-requiring pods the Kubernetes scheduler allows onto a node and the number of pods that are actually able to attach their volumes.
Our investigation determined that volume_limits.go is the right place to fix this problem.
A similar issue was observed last year with nodes based on m5d.16xlarge and m5d.24xlarge, and it no longer occurs. We concluded that a change to volume_limits.go introduced in March 2022 fixed the problem for the m5d instance types, and we want to do the same for m6id and x2idn.
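To make the gap concrete, here is a hedged back-of-the-envelope sketch. On many Nitro instances, EBS volumes, ENIs, and NVMe instance-store disks share one pool of attachment slots, so a type whose instance-store disks are not accounted for (or that is misclassified as non-Nitro) advertises more volume capacity than it really has. Every count below is an assumption for illustration, not a measured value for m6id or x2idn.

```go
package main

import "fmt"

func main() {
	// Assumed numbers for illustration only.
	const sharedSlots = 28  // attachment slots shared by EBS, ENIs, and NVMe instance store on many Nitro types
	const enis = 1          // primary network interface
	const rootVolume = 1    // root EBS volume
	const instanceStore = 4 // hypothetical count of NVMe instance-store disks

	attachable := sharedSlots - enis - rootVolume - instanceStore
	advertised := 39 // what a type misclassified as non-Nitro might report (illustrative)

	fmt.Printf("attachable EBS volumes: %d\n", attachable)
	fmt.Printf("advertised limit:       %d\n", advertised)
	fmt.Printf("pods that schedule but cannot attach: %d\n", advertised-attachable)
}
```

In this sketch the scheduler would place 17 more volume-requiring pods than the node can serve, which matches the symptom observed in the tests described above.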