Correct volume limits for i4i instance types #1699
Conversation
Is this a bug fix or adding new feature?
This is a bug fix.
What is this PR about? / Why do we need it?
This PR adds support for m6id and x2idn instance types to volume_limits.go so that the Kubernetes scheduler can correctly determine the number of volumes available on these node types. This PR also removes the x2idn instance types from the non-Nitro list.
What testing is done?
No direct testing was performed. Kubernetes tests on nodes based on the i4i.32xlarge instance type have shown a gap between the number of volume-requiring pods the Kubernetes scheduler allows to run on a node and the number of pods that are actually able to attach their volumes. Our research determined that this is the place to make the change that fixes the problem. Similar issues were observed last year with nodes based on m5d.16xlarge and m5d.24xlarge, and later with m6id.16xlarge and m6id.32xlarge; those older issues no longer occur. We concluded that a change to volume_limits.go introduced in March 2022 fixed the problem for m5d instance types and that our own PR for m6id in December 2022 fixed it for m6id instance types, and we want to do the same for i4i-based nodes.
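For readers unfamiliar with volume_limits.go, the change amounts to adding per-instance-type attachment limits to a lookup table. The sketch below is purely illustrative: the map name, the helper function, and the numeric limits are assumptions made for this example, not the driver's actual code or AWS's published figures; it only shows the general shape of such an entry.

```go
// Illustrative sketch only: the table name, helper, and limits below are
// hypothetical and are not copied from volume_limits.go or AWS documentation.
package main

import "fmt"

// dedicatedVolumeLimits maps an EC2 instance type to the number of EBS
// volumes that can be attached to it (placeholder values).
var dedicatedVolumeLimits = map[string]int{
	"i4i.16xlarge": 39, // placeholder
	"i4i.32xlarge": 27, // placeholder
}

// maxVolumesFor returns the limit for a listed instance type, or a
// conservative default when the type is not in the table.
func maxVolumesFor(instanceType string) int {
	if limit, ok := dedicatedVolumeLimits[instanceType]; ok {
		return limit
	}
	return 25 // fallback; the real driver computes this differently
}

func main() {
	fmt.Println(maxVolumesFor("i4i.32xlarge")) // 27 (placeholder)
	fmt.Println(maxVolumesFor("i4i.large"))    // 25 (fallback)
}
```

In the real driver the values are derived from AWS's documented per-instance attachment limits, and the table feeds into the attach capacity the plugin reports to Kubernetes, which is what the scheduler ultimately counts against.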
Hi @talnevo. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
@talnevo: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can we add an entry for i4i.large as well? https://aws.amazon.com/ec2/instance-types/i4i/
Yes, I suppose you can add an entry for i4i.large.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: torredil
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Is this a bug fix or adding new feature?
This is a bug fix
What is this PR about? / Why do we need it?
This PR adds support for i4i instance types to volume_limits.go so that the Kubernetes scheduler will be able to correctly determine the number of volumes available on these node types.
What testing is done?
No direct testing was performed. Kubernetes tests on nodes based on the i4i.32xlarge instance type have shown a gap between the number of volume-requiring pods the Kubernetes scheduler allows to run on a node and the number of pods that are actually able to attach their volumes. Our research determined that this is the place to make the change that fixes the problem. Similar issues were observed last year with nodes based on m5d.16xlarge and m5d.24xlarge, and later with m6id.16xlarge and m6id.32xlarge; those older issues no longer occur. We concluded that a change to volume_limits.go introduced in March 2022 fixed the problem for m5d instance types and that our own PR for m6id in December 2022 fixed it for m6id instance types, and we want to do the same for i4i-based nodes.
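One way to observe the gap described above is to compare the attachable-volume count that the CSI driver reports to Kubernetes (the value the scheduler uses) with what actually attaches on the node. The snippet below is a hedged sketch, not part of this PR: it uses the standard client-go and CSINode APIs, and the kubeconfig handling is an assumption chosen for brevity.

```go
// Sketch: print the attachable-volume count each CSI driver reports per node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// CSINode objects carry the per-driver allocatable attachment count that
	// the scheduler compares against when placing pods with volumes.
	csiNodes, err := clientset.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range csiNodes.Items {
		for _, d := range n.Spec.Drivers {
			if d.Allocatable != nil && d.Allocatable.Count != nil {
				fmt.Printf("%s / %s: attachable volume count %d\n", n.Name, d.Name, *d.Allocatable.Count)
			}
		}
	}
}
```

Against a node whose instance type is missing from volume_limits.go, the reported count would be expected to diverge from what the instance can actually attach, which is the mismatch this PR aims to close for i4i.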