Fix issue 6294 - "All pods simultaneously restart during worker scaling" #6520
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hi @holmesb. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Can you use a wildcard domain in the no_proxy config? That won't change when scaling, thus no restart. |
no_proxy contains nodes' single-label names. Regardless, the wildcard domain should be in no_proxy, but I think this is a separate issue. |
I was thinking you override the no_proxy var with a wildcard value and that's it. Just reconfig and no code changes. But I might be completely wrong... |
In addition to the single-label names assumption, the generated no_proxy contains IPs too. So you'd have to hope connections aren't established to these as well; if they are, they'll go via the proxy and fail. In which case, your suggestion would mean proxy users end up micro-managing their no_proxy var to keep single-label name and IP connections working. This PR is non-breaking and changes nothing unless the user opts in (changes no_proxy_exclude_workers). We've scaled our production cluster without downtime with this fix. |
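To make the objection concrete, here is a sketch of why a wildcard-only no_proxy leaves gaps. All values below are invented for illustration; a real no_proxy is generated from the cluster inventory. Proxy clients match no_proxy entries against the exact host they connect to, so a wildcard domain entry covers FQDNs under that domain but not bare single-label names or raw IPs:

```shell
# Hypothetical values, not from the actual cluster:
NO_PROXY_GENERATED="node1,node2,10.0.0.11,10.0.0.12"
NO_PROXY_WILDCARD=".cluster.example.com"

# ".cluster.example.com" matches e.g. node1.cluster.example.com, but a
# client connecting to the bare name "node1" or to 10.0.0.11 finds no
# matching entry in a wildcard-only list and falls back to the proxy:
for target in node1 10.0.0.11; do
  case ",$NO_PROXY_WILDCARD," in
    *",$target,"*) echo "$target: bypasses proxy" ;;
    *)             echo "$target: goes via proxy" ;;
  esac
done
```

With the generated list, both targets would match and bypass the proxy; with the wildcard-only list, neither does.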
Some markdown issues have been flagged by the CI. |
You don't add a commit to fix an unmerged commit. Amend and force-push instead. |
No markdown errors now @EppO, but is there any option now to correct the author of the first commit? Or should I create a fresh PR?
You can rebase your branch on top of master and change the commit message/author. In your branch, run these commands:
This will show you the list of commits; select reword for the first commit at the top, and then squash or fixup for the second one.
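The exact commands didn't survive in the thread above; an interactive rebase along these lines produces the commit list described (remote and branch names here are placeholders, not from the PR):

```shell
# Placeholder remote/branch names; adjust to your fork and branch.
git fetch origin
git rebase -i origin/master
# An editor opens with one "pick" line per commit. Change "pick" to
# "reword" on the first commit (to fix its message/author) and to
# "squash" or "fixup" on the second, then save and exit.
git push --force-with-lease origin my-branch
```

`--force-with-lease` is a safer variant of the plain force push: it refuses to overwrite the remote branch if someone else pushed in the meantime.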
Force-pushed from aa9e15d to 4750b06.
Author info is still messed up:
You can now just do git commit --amend --author="Your Name <your@email>" and then git push --force again. |
Force-pushed from 4750b06 to 6d0f309.
Thanks @EppO, hopefully fixed the author/committer now. CI failed, though it looks like a CI-side issue. |
CI issue indeed, I've retried the job. |
/lgtm |
Fine with the change, but can't we have something simpler? |
Force-pushed from 6d0f309 to 2fe8580.
I've removed the second for loop @floryut |
Do we need to give the CI another kick @floryut ? |
If no_proxy_exclude_workers is true, workers will be excluded from the no_proxy variable. This prevents docker engine restarting when scaling workers. Signed-off-by: holmesb <[email protected]>
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Atoms, holmesb. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. |
/lgtm |
Looks like you added a file with weird (Windows?) line endings. Can you please fix that? |
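One way to find and fix CRLF endings, as a sketch (dos2unix would also work if installed; the grep/sed flags below assume GNU tools):

```shell
# List text files containing carriage returns (excluding .git), then
# strip the trailing \r from each line in place:
grep -rlI "$(printf '\r')" . --exclude-dir=.git | while read -r f; do
  sed -i 's/\r$//' "$f"
done
```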
Merge branch 'master' of github.com:kubernetes-sigs/kubespray into the PR branch (1632 commits).
Merge branch 'master' of https://github.com/kubernetes-sigs/kubespray into the PR branch.
If no_proxy_exclude_workers is true, workers will be excluded from the no_proxy variable. This prevents docker engine restarting when scaling workers. (kubernetes-sigs#6520) Signed-off-by: holmesb <[email protected]>
If no_proxy_exclude_workers is true, workers will be excluded from the no_proxy variable. This prevents docker engine restarting when scaling workers.
/kind bug
What this PR does / why we need it:
See issue #6294
Which issue(s) this PR fixes:
Fixes #6294
Does this PR introduce a user-facing change?:
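The opt-in behavior this PR describes can be sketched in shell terms. The flag name mirrors the PR; the host lists are invented, and the real implementation is Ansible templating, not shell:

```shell
no_proxy_exclude_workers=true            # the new flag; set true here to show the new behavior
control_plane_hosts="master1,10.0.0.1"   # invented example values
worker_hosts="worker1,worker2,10.0.0.11,10.0.0.12"

if [ "$no_proxy_exclude_workers" = "true" ]; then
  # Workers are left out, so adding or removing workers never changes
  # no_proxy, and the container engine is not restarted on scale-up:
  no_proxy="$control_plane_hosts"
else
  # Default behavior: every node appears in no_proxy, so scaling
  # workers rewrites the variable and triggers the restart handler.
  no_proxy="$control_plane_hosts,$worker_hosts"
fi
echo "$no_proxy"
```

This is why the change is non-breaking: with the flag at its default, the generated value is identical to what Kubespray produced before.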