Add support for max pods per node to node pool. #2038
Conversation
I'm going to run all the container tests because of the revendor.
It looks like there's also a version of this that gets set on the cluster (for the default node pool), want to add that too? Also don't forget docs!
	network = "${google_compute_network.container_network.name}"
	subnetwork = "${google_compute_subnetwork.container_subnetwork.name}"
	private_cluster = true
I'm going to go ahead and just assume that all of these attributes are actually needed in order to set the field (including cidr_blocks = [], which makes me sad).
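To illustrate the point about interdependent attributes, here is a hedged sketch of what the test fixture under discussion might look like. The resource name, zone, CIDR range, and secondary-range names are placeholders, not taken from the PR; the surrounding attributes reflect what private GKE clusters of that provider era typically required alongside private_cluster.

```hcl
# Hypothetical sketch, not the PR's actual fixture: setting
# private_cluster appears to drag in several companion attributes,
# including an explicitly empty cidr_blocks list.
resource "google_container_cluster" "with_private_cluster" {
  name               = "example-cluster" # placeholder
  zone               = "us-central1-a"   # placeholder
  initial_node_count = 1

  network    = "${google_compute_network.container_network.name}"
  subnetwork = "${google_compute_subnetwork.container_subnetwork.name}"

  private_cluster        = true
  master_ipv4_cidr_block = "10.42.0.0/28" # placeholder range

  master_authorized_networks_config {
    cidr_blocks = [] # required even when empty, per the comment above
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"     # placeholder
    services_secondary_range_name = "services" # placeholder
  }
}
```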
It's true. :( GKE's interdependencies are very complex.
Documented - but the version that gets set on the cluster is a default in case the node pool does not have a policy, rather than a policy for the default node pool. I think that is more work than it's worth right now - I don't expect a lot of users for this feature.
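A hedged sketch of the distinction described above: the node pool gets its own max_pods_per_node field (the subject of this PR), while the cluster-level counterpart (assumed here to be named default_max_pods_per_node, and not part of this PR) would only act as a fallback for pools that set no limit. Resource names, zone, and the numeric values are illustrative placeholders.

```hcl
# Cluster-level value: a default applied to node pools that do not
# specify their own limit - not a policy for the default node pool.
# (default_max_pods_per_node is an assumed field name, not added here.)
resource "google_container_cluster" "primary" {
  name                      = "example-cluster" # placeholder
  zone                      = "us-central1-a"   # placeholder
  remove_default_node_pool  = true
  initial_node_count        = 1
  default_max_pods_per_node = 64
}

# Per-pool limit added by this PR; overrides the cluster default.
resource "google_container_node_pool" "np" {
  name              = "example-pool" # placeholder
  zone              = "us-central1-a"
  cluster           = "${google_container_cluster.primary.name}"
  node_count        = 1
  max_pods_per_node = 30
}
```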
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Fixes #2023.