update coredns for security best practice #8550
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Hi @Ewocker. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Ewocker. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
/retest
I'm not quite sure what to do with this apparent bug in the We're execing
In researching for the kops office hours, I found that "server side apply" is the Kubernetes-level fix for the broken strategic merge patch. That is a beta feature right now, and the decision from office hours is that server side apply is too new and risky to take on for now. The approach would instead be to get channels to use delete-and-replace instead of apply. The question then becomes how to decide when to do delete-and-replace. Some options I see:
We should also consider whether an option can address the immutable field problem.
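For orientation, a minimal sketch of the three flows discussed above expressed as plain kubectl invocations. The file name addon.yaml and the exact flags are assumptions for illustration, not what this PR or channels implements:

    # Current flow: client-side apply, which computes a strategic merge patch
    kubectl apply -f addon.yaml

    # The Kubernetes-level fix discussed at office hours: server-side apply (beta)
    kubectl apply --server-side --force-conflicts -f addon.yaml

    # Delete-and-replace: recreates the objects, which also sidesteps immutable-field conflicts
    kubectl replace --force -f addon.yaml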
While this isn't directly related, I think one other issue that could benefit from adjusting how Kops applies channels is the removal of resources. If a manifest no longer defines a resource (either during an upgrade or a downgrade), Kops currently leaves it running in the cluster.
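As a point of reference for that removal problem, kubectl itself has a pruning mode that deletes previously applied objects no longer present in the manifests; whether channels could adopt something like it is exactly the open question here. The label selector (addon=coredns) and file name below are hypothetical:

    # Apply the addon and delete objects matching the selector that were
    # previously applied but are no longer defined in addon.yaml
    kubectl apply --prune -l addon=coredns -f addon.yaml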
@rifelpet In this case, while transitioning from KubeDNS to CoreDNS, would KubeDNS be removed? Is that what we want, or do we want the user to manually remove one of them?
The current migration instructions require the user to manually remove the KubeDNS deployment. It would be great if Kops could do that automatically, but in the case of KubeDNS/CoreDNS I think knowing when it is safe to remove might add significant complexity: all of the CoreDNS pods need to be Ready, the replica count would need to match, etc. I think that might be too much to strive for with this migration.
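A minimal sketch of that manual cleanup step, assuming the deployments use their conventional names (coredns and kube-dns in kube-system); this is the user-run workaround described above, not something kops automates:

    # Confirm the CoreDNS rollout is complete before touching KubeDNS
    kubectl -n kube-system rollout status deployment/coredns

    # Then remove the old KubeDNS deployment; CoreDNS keeps serving through
    # the existing kube-dns Service
    kubectl -n kube-system delete deployment kube-dns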
I am not sure what other options we have for an alternate flow other than replace (delete then create), but I can see some problems with doing that:
In the original comment, the way I was able to do this without service downtime was to replace only the kube-dns Service instead of everything. But I am not sure if it is possible to implement this in kops today. In that case I will assume that we need to live with the fact that there can be a blip during this type of immutable or conflicting update.
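A sketch of that narrower approach, replacing only the kube-dns Service rather than deleting and recreating the whole addon. The manifest path is hypothetical; note that a non-force replace submits a full PUT, so the replacement manifest has to carry over existing immutable fields such as clusterIP or the API server will reject it:

    # Replace only the kube-dns Service in place; the object is never deleted,
    # so its clusterIP is preserved and DNS traffic is not interrupted
    kubectl -n kube-system replace -f kube-dns-service.yaml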
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@Ewocker: PR needs rebase.
@Ewocker: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closed this PR. In response to this:
Note:
During migration, when kops patches the kube-dns Service, the duplicate port number (53 is exposed over both UDP and TCP) can cause only the first port entry to end up with the correct targetPort name. In this case kubectl replace needs to be used to resolve the problem. See kubernetes/kubernetes#47249.
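A sketch of that check and workaround, assuming the standard kube-dns Service shape with two spec.ports entries for port 53 (one UDP, one TCP); the manifest path is hypothetical:

    # apply's strategic merge patch keys spec.ports on "port", so the two
    # port-53 entries can collide; verify both entries carry the intended
    # targetPort after the migration
    kubectl -n kube-system get service kube-dns \
      -o jsonpath='{range .spec.ports[*]}{.name}{" -> "}{.targetPort}{"\n"}{end}'

    # If only the first entry is correct, replace the Service instead of
    # patching it (carry over the existing clusterIP, which is immutable)
    kubectl -n kube-system replace -f kube-dns-service.yaml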