Added PolicyConfigMap and PolicyConfigMapNamespace to KubeSchedulerConfig #3546
Conversation
Hi @whs. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for the PR! A couple of questions: Which versions of k8s support the configmap? If support begins at a specific version, we will need validation. Does the scheduler start without the configmap in place? Can you add some documentation in the configspec markdown doc under docs?
Always appreciate the help!
pkg/apis/kops/componentconfig.go
@@ -330,6 +330,10 @@ type KubeSchedulerConfig struct {
	Image string `json:"image,omitempty"`
	// LeaderElection defines the configuration of leader election client.
	LeaderElection *LeaderElectionConfiguration `json:"leaderElection,omitempty"`
	// Name of configmap to use for scheduler policy
	PolicyConfigMap string `json:"policyConfigMap,omitempty" flag:"policy-configmap"`
Can we re-word the API elements into the typical Go syntax for comments? The first word of the comment should be the name of the thing being described, in this case PolicyConfigMap. See https://golang.org/doc/effective_go.html#commentary: "Comments documenting declarations should be full sentences, even if that seems a little redundant. This approach makes them format well when extracted into godoc documentation. Comments should begin with the name of the thing being described and end in a period."
/ok-to-test
So we have a bit of a chicken-and-egg problem with the cluster starting properly. @whs, the chicken-and-egg problem is that in the usual case we need the cluster to start. The cluster depends on the scheduler to get components like CNI running, which is a base component. It would be optimal if this configmap were reloadable. We will need to validate that the k8s cluster is over 1.7.x. You can see how we validate at a component level here: https://github.com/kubernetes/kops/blob/master/pkg/model/components/kubecontrollermanager.go There is a go file under the same package for the scheduler.
We can validate that the k8s cluster version is compatible with the CLI option. The chicken-and-egg problem with k8s being up and the configmap is kinda bugging me.
K8s or kops?
In this file only kubelet is documented. Are there other places in the kops docs where this option should be documented, or should I add it to that file?
@whs we do not have full documentation in that markdown file, but when new changes come in we try to add them there. Just cut a new section for the scheduler.
If we throw an error when this key is present during cluster creation, only
allowing it with update, would that work?
All pending concerns should now be resolved, except that the cluster still won't start with this flag on. Either we do something about that or just document it.
So this PR as it stands is good. We can map flags, and you've done that perfectly (you've even matched the casing). So the question is where the configuration should live, I guess. Is there a default scheduler configuration we could create? (just as we create a default LimitRange, for example: https://github.com/kubernetes/kops/blob/master/upup/models/cloudup/resources/addons/limit-range.addons.k8s.io/v1.5.0.yaml ). This feels like the first step towards componentconfig, which is super exciting. I'm guessing what we should do is automatically create a default scheduler configmap (perhaps in 1.9) and point the scheduler to read from that configmap. We'd have to make sure we didn't replace it on kops/kubernetes updates, I guess. Does that sound right @whs ?
/lgtm
The default config for kube-scheduler is hardcoded in the scheduler's source. It should be like this if written as JSON. I'm not sure that providing a default scheduler configmap would be the best solution here, as we would have to keep track of scheduler default policy updates and update the file (for example, pod affinity was implemented by adding a predicate and a priority). It also leads to the question of what we should do if the user updates the file and then upgrades the cluster. To be clear, if the user doesn't provide this option, the scheduler falls back to its hardcoded default. Providing a default configmap when the user specifies this option is one way we could make this work, but it would be complicated. Since this is an option for advanced users, I think noting in the docs and godoc that this option will not work on cluster creation should be fine. Is this OK with you? And is it possible to detect cluster creation when validating the spec, so I can add this check in the code as well?
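The JSON example referred to above was not captured in this transcript. For orientation, a kube-scheduler Policy file of that era generally took the following shape; the predicate and priority lists here are abbreviated illustrations, not the scheduler's actual hardcoded default:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "MatchNodeSelector"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```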
I added the docs. I think we're ready to merge.
/lgtm
Hopefully they'll add reloading once it becomes clearer how reloading should work :-)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: justinsb. The full list of commands accepted by this bot can be found here.
@whs something I thought about. We could have kops manage the config map.
That could work, and was pointed out by @justinsb earlier as well. As this option is optional (if you don't specify anything, the scheduler has its hardcoded default), I think adding a kops-managed scheduler policy would add more burden on the kops maintainers, as we would need to keep track of upstream policy changes.
@whs only create the config map if the user wants it. And set the scheduler flag. We have dynamic addons. Let me know if you are interested in how to do this.
/test all
[submit-queue is verifying that this PR is safe to merge]
I think I have some idea now. Give me an hour.
Automatic merge from submit-queue.
Automatic merge from submit-queue. UsePolicyConfigMap for kube-scheduler. Continued from #3546. In this version, a single option `usePolicyConfigMap` is added that installs scheduler.addons.k8s.io, which contains a default configmap.
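For readers unfamiliar with the approach that was merged: a kops-managed addon of this kind ships a ConfigMap that the scheduler is pointed at via its policy flags. The sketch below is illustrative only; the name, namespace, and policy contents are assumptions, not taken from the actual scheduler.addons.k8s.io manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy        # illustrative; the real addon may use a different name
  namespace: kube-system
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [{"name": "PodFitsHostPorts"}],
      "priorities": [{"name": "LeastRequestedPriority", "weight": 1}]
    }
```

kube-scheduler would then consume it with flags along the lines of `--policy-configmap=scheduler-policy --policy-configmap-namespace=kube-system`, which is what the `policyConfigMap` and `policyConfigMapNamespace` fields in this PR map to.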