Add some more documentation to clarify how labels and GPUs work with the #2924
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @AaronKalair!
CLA Signed
/assign @Jeffwan |
@@ -162,6 +162,40 @@ If you'd like to scale node groups from 0, an `autoscaling:DescribeLaunchConfigu
}
```

### Gotchas

Without these tags, when the Cluster Autoscaler needs to increase the number of nodes and a node group creates nodes with taints that the pending pod does not tolerate, the autoscaler only learns about the taint after the node has been created and it sees that the node is tainted. From that point on the information is cached and subsequent scaling operations take it into account, but this means the behaviour of the Cluster Autoscaler differs between the first and subsequent scale-up requests, which can lead to confusion.
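For context, a minimal sketch of what such node-template tags might look like on the Auto Scaling group; the ASG name, label key, and taint values below are placeholders, not taken from the PR:

```
# Sketch only: hypothetical ASG name and values. These node-template tags tell the
# Cluster Autoscaler, before any node exists, which labels and taints new nodes
# will carry, so the first scale-up from 0 behaves like later ones.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-gpu-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/label/gpu-type,Value=nvidia-tesla-v100" \
  "ResourceId=my-gpu-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/taint/dedicated,Value=gpu:NoSchedule"
```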
Thanks! This is great. Do you want to explain here why GPU nodes take time to start up? The device plugin takes time to advertise resources to the API server, which means pods cannot be scheduled on the otherwise-ready node. That's the main reason CA triggers an unnecessary second scale-up. With the label, CA will wait for the GPU to be ready.
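A hedged illustration of the label this comment refers to: the AWS provider in CA treats the accelerator label as its GPU label, so tagging the ASG's node template with it (the ASG name and GPU type below are placeholders) lets CA wait for the device plugin to advertise the GPU resource rather than scaling up again:

```
# Sketch only: hypothetical ASG name and GPU type. With this label present,
# CA waits for the device plugin to advertise nvidia.com/gpu on the new node
# instead of treating the node as ready and triggering a second scale-up.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-gpu-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/label/k8s.amazonaws.com/accelerator,Value=nvidia-tesla-v100"
```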
Added, and improved the formatting in the latest commit, thanks!
Thanks. One more thing, could you squash the commits to 1?
Have squashed the commits and force pushed
ClusterAutoScaler on AWS: force-pushed from 49624ab to 19a78f6
/lgtm Thanks again for the contribution! @AaronKalair Let me know if you have any other GPU use cases not implemented in CA.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Jeffwan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Hi,
I experienced some confusion when trying to use the Cluster Autoscaler and so I've tried to document some of the things I ran into to help others in the future.