Set MaxPods when using Amazon VPC CNI Plugin #6058
Conversation
Hi @ripta. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test Thanks @ripta!
/assign @andrewsykim Ready to review.
Thanks for this @ripta - I was updating the cluster template as part of my workflow (via scripting). This is great default behavior.
/lgtm |
Not familiar enough with this area to comment :) /unassign
Looks great, but I do think we should avoid going >= 110, unless the user explicitly specifies kubelet.MaxPods >= 110. And in general I think we should do […]. I also think it would be nice to avoid vendoring all of aws-vpc-cni, so that we can get closer to having this instance type DB move out of kops into a file in some shared repo. Then hopefully aws-vpc-cni and kops and the other projects can all refer to that file :-)
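For context, the AWS VPC CNI limits pods by ENI capacity, and the comment above asks that the computed default be capped at 110 unless the user overrides it. A minimal sketch of that logic (the function names, the example ENI/IP counts, and the exact clamping behavior are illustrative assumptions, not kops' actual implementation):

```go
package main

import "fmt"

// maxPodsForInstance estimates the pod limit under the AWS VPC CNI,
// where each non-host-networking pod needs a secondary IP address:
//   maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
// The ENI and per-ENI IP counts come from the instance type database.
func maxPodsForInstance(enis, ipsPerENI int) int {
	return enis*(ipsPerENI-1) + 2
}

// defaultMaxPods clamps the computed value to 110 (the commonly
// recommended per-node ceiling) unless the user explicitly set
// kubelet.MaxPods, in which case their value wins.
func defaultMaxPods(computed int, userOverride *int) int {
	if userOverride != nil {
		return *userOverride
	}
	if computed > 110 {
		return 110
	}
	return computed
}

func main() {
	// Example: an instance with 3 ENIs and 10 IPv4 addresses per ENI
	// supports 3*(10-1)+2 = 29 pods, which is below the 110 cap.
	fmt.Println(defaultMaxPods(maxPodsForInstance(3, 10), nil))
}
```

With a large instance (say 15 ENIs and 50 IPs each, computing to 737), the default would still be clamped to 110, matching the reviewer's request that only an explicit user setting can exceed it.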
(cherry picked from commit 92fd86f)
@justinsb - I addressed the comments and resolved the merge conflict. PTAL. /retest
Thanks - let's get this into 1.11 then :-) /approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: chrisz100, justinsb, ripta The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
PR kubernetes#6058 added dedup in a better way than I had previously done, so remove my more complicated and now superfluous second dedup.
This PR closes #5510 and replaces it by fixing the merge conflict and implementing the automatic generation for MaxPods, as I've found the original author's patch to be very valuable and would love to see it merged.

I also added a seen map to machine_types.go, because the Pricing API was doing weird things, where it returned the same instance family multiple times (in some cases, up to 3 times).

Also, @justinsb added a comment in the original PR about possibly lowering MaxPods. I didn't make any changes here, but let me know if I should try to incorporate that. On one hand, if there are performance penalties with large numbers of pods, it sounds like the defaults should protect cluster operators from shooting themselves in the foot; on the other hand, there may be legitimate use cases where people run many small pods and want to pack more per node. Should it be a flag to allow folks to opt into running more pods than recommended (110 per node)?
map to machine_types.go, because the Pricing API was doing weird things, where it returned the same instance family multiple times (in some cases, up to 3 times).Also, @justinsb added a comment in the original PR about possibly lowering the MaxPods. I didn't make any changes here, but let me know if I should try to incorporate that. One one hand, if there are performance penalties with large numbers of pods, it sounds like the defaults should protect cluster operators from shooting themselves in the foot; OTOH, there may(?) exist legitimate use cases that people may have many small pods and want to pack more pods. Should it be a flag to allow folks to opt into running more pods than recommended (110 per node?)