Add addon for aws node termination handler #9921
Conversation
pkg/apis/kops/cluster.go
Outdated
@@ -158,6 +158,9 @@ type ClusterSpec struct {
	CloudConfig *CloudConfiguration `json:"cloudConfig,omitempty"`
	ExternalDNS *ExternalDNSConfig `json:"externalDns,omitempty"`

	// Components contains the configuration for kops components.
	Components *ComponentConfig `json:"components,omitempty"`
What is the definition of a "component"? What determines if something is a "component" versus an "addon" or just has its own subfield in the cluster spec?
The reason for adding a new struct was to not further "bloat" the cluster spec. This was briefly mentioned on Slack.
Addons was already used for the case where you provide manifest URLs and such, so it didn't quite seem to fit what I wanted to do.
The reason for calling it "components" is that the API spec for this lives in componentconfig.go and that the options builders are also called components.
If this part goes in, I want to move CAS (the cluster autoscaler) in there as well. In future spec versions, other components like kube-dns, kube-proxy etc. could go in there too.
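For illustration only, a minimal Go sketch of the grouping proposed above, assuming the cluster autoscaler config type is called ClusterAutoscalerConfig; the names here are not the merged API:

// Sketch of the proposed grouping of optional kops-managed components.
// All names are illustrative, not the final API.
type ComponentConfig struct {
	// ClusterAutoscaler could move here in a future spec version.
	ClusterAutoscaler *ClusterAutoscalerConfig `json:"clusterAutoscaler,omitempty"`
	// NodeTerminationHandler configures the AWS node termination handler addon.
	NodeTerminationHandler *NodeTerminationHandlerConfig `json:"nodeTerminationHandler,omitempty"`
}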
I am OK with having a new structure to keep new addons separated. Depending on the agreed purpose, the name is not quite right.
I don't think kube-dns, kube-proxy, ... can be moved here, because they already have established sections.
Just to be clear: I am thinking of when we do a full new API version.
Naming things is hard, but this was the only one I could think of besides "addons", which was already in use.
I agree about the "hard" part :). I would see kube-proxy, kube-... in a Kubernetes struct, not in the "addons" one, which should be for optionally managed things.
So I could always add this to the addons struct instead, but it does feel a bit strange to me.
Adding a new level to the v1alpha2 API makes it even more inconsistent than it already is. It might make sense to reorganize things when we can do a v1beta1 API, but adding a new level at this point is not going to make things any less confusing.
This is fundamentally an AWS-specific feature and should be off of the AWS-specific cloud provider settings. The fact that it is implemented as a bootstrap addon is an implementation detail.
Sounds good to me. So, the same way the autoscaler was done.
Moving it to the AWS struct sounds fine to me. But the autoscaler stays where it is, then.
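As a rough sketch of the placement being agreed on here (a field on the cluster spec alongside the other AWS-specific settings rather than a new Components level); the field name and placement are assumptions for illustration:

type ClusterSpec struct {
	// ... existing fields ...

	// NodeTerminationHandler configures the AWS node termination handler addon.
	// Assumed name/placement; the bootstrap addon behind it is an implementation detail.
	NodeTerminationHandler *NodeTerminationHandlerConfig `json:"nodeTerminationHandler,omitempty"`
}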
// NodeTerminationHandlerConfig determines the node termination handler configuration.
type NodeTerminationHandlerConfig struct {
	// Enabled enables the node termination handler.
	// Default: false
Should default to true when there is a non-nil nodeTerminationHandler.
Thanks. Will fix that.
Do we have this behaviour anywhere else? Should we document it and agree on it at the next office hours first?
CAS did not have this. I know kube-proxy does, but that one is a bit special. Either way is fine with me.
kube-proxy has this behaviour because it is (historically) a required component.
Precedence would say that it should be disabled by default even if the struct is non-nil. I'd also think that is the most intuitive default.
I would disagree that it is the most intuitive default. Why would anyone specify a non-nil struct if they don't want that thing enabled? With the enabled field defaulting to false, specifying a non-nil struct and any other subfield will have no effect unless they also specify an explicit value for enabled.
Numerous existing things, such as cloudControllerManager, the authentication, authorization, networking, and API access options, etc., are enabled by specifying non-nil structs.
The point of an enabled field is to be able to disable a whole subcomponent/feature which has multiple config settings.
OK, but let's document somewhere for the future that we expect this behaviour for new components.
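A minimal sketch of the defaulting behaviour agreed on above, assuming Enabled is a *bool and that it is set in a populate-defaults style hook like kops uses elsewhere; the function name is illustrative:

// Sketch only: Enabled defaults to true when the user provided a non-nil
// nodeTerminationHandler block but did not set Enabled explicitly.
func defaultNodeTerminationHandler(spec *ClusterSpec) {
	nth := spec.NodeTerminationHandler
	if nth == nil {
		return // no block at all: the feature stays off
	}
	if nth.Enabled == nil {
		enabled := true
		nth.Enabled = &enabled
	}
}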
requests:
  cpu: 50m
  memory: 64Mi
nodeSelector:
I think it would make sense to at least have the option of adding a nodeSelector that restricts it to "node-role.kubernetes.io/spot-worker": "true".
It might make sense to make that the default when scheduled event draining is disabled.
Not against this one either. But perhaps as a follow-up?
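A hedged sketch of that follow-up idea, assuming the DaemonSet's node selector is rendered from Go options and that a scheduled-event-draining flag exists on the config; the field name EnableScheduledEventDraining and the function name are assumptions:

// Sketch only: restrict the handler to spot instances unless scheduled
// event draining is enabled, in which case it must run on every node.
func nodeTerminationHandlerNodeSelector(nth *NodeTerminationHandlerConfig) map[string]string {
	if nth.EnableScheduledEventDraining != nil && *nth.EnableScheduledEventDraining {
		return nil // run everywhere to catch scheduled maintenance events
	}
	return map[string]string{"node-role.kubernetes.io/spot-worker": "true"}
}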
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: hakman, olemarkus
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
No description provided.