Update CoreDNS to v1.12 to fix OOM & restart #1037
Comments
There is a known issue on Ubuntu, where kubeadm sets up CoreDNS (and also kube-dns) incorrectly. The fixes are to update kubelet to use the correct resolv.conf.
FYI, the kubelet flag is --resolv-conf.
To be more correct, it's kubelet that is set up incorrectly, not coredns/kube-dns directly. The next version of CoreDNS will be able to detect this misconfiguration and put warnings/errors in the logs. But that's not a fix; it just makes the failure less mysterious. Not sure if it's up to kubeadm to detect use of systemd-resolved.
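For context, the usual workaround on hosts where systemd-resolved owns /etc/resolv.conf is to point kubelet at the file holding the real upstream nameservers instead of the 127.0.0.53 stub. A minimal sketch, assuming a systemd-based install where kubelet reads extra flags from /etc/default/kubelet (file locations vary by distro and kubeadm version):

# see whether /etc/resolv.conf is the systemd-resolved stub (it lists 127.0.0.53)
grep ^nameserver /etc/resolv.conf
# point kubelet at the file with the real upstream nameservers,
# e.g. by adding this to KUBELET_EXTRA_ARGS in /etc/default/kubelet:
#   --resolv-conf=/run/systemd/resolve/resolv.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet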
hi @liheyuan, what are the contents of the file when you start the kubelet?
Ah... I didn't know about kubernetes/kubernetes#64665. Good to know!
@chrisohaver Thanks for your reply, I'll give it a try. Also @chrisohaver @neolit123, I tried modifying the CoreDNS Pod definition to increase the memory limit from 170MB (the default) to 256MB, and it works like a charm... Maybe this is another solution.
thanks for finding that. @chrisohaver
No - in fact, the CoreDNS manifests don't have a memory cap defined by default. So I don't know where the cap was introduced. Possibly in this cluster...
@liheyuan thanks for noticing the low memory cap. By any chance, did you add the initial 170 memory limit to the coredns deployment, or perhaps add a container memory limit to the kube-system namespace?
@chrisohaver I'm not sure; the 170MB limit was there when I exported coredns's yaml using kubectl.
I'm also using kubeadm to launch a local kube cluster and am running into the same issue. I also have the 170Mi cap in the yaml for the coredns deployment. I can't seem to get it working, unlike @liheyuan. After I run kubeadm init I see nothing related to systemd-resolved. @neolit123, am I doing anything wrong? I have the most recent version of kubeadm.
@asipser what did you set the memory cap to?
@neolit123, would kubeadm set up memory caps in a cluster by default? E.g. in the kube-system namespace, or directly in the coredns deployment?
@neolit123, sorry, it was just brought to my attention that there is a hard-coded memory limit (that is too small) in the deployment in the kubernetes repo. It's not in the coredns deployment, which is where I looked earlier. I'm not 100% clear on the reasoning for adding it in the kubernetes repo copy of it. I believe it was copied from the kube-dns settings. We're updating that now...
I got it working by starting up the systemd-resolved service, which updated my /etc/resolv.conf properly. Even with the 170Mi cap I could get coredns working. Thanks anyways @chrisohaver.
@asipser Glad it's working for you. Take care that systemd-resolved hasn't put the local address 127.0.0.53 in /etc/resolv.conf ... that will cause problems for upstream lookups.
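A quick way to check for that condition on the node (a sketch; on typical Ubuntu setups systemd-resolved keeps the real upstream list in /run/systemd/resolve/resolv.conf):

grep ^nameserver /etc/resolv.conf        # 127.0.0.53 here means the systemd-resolved stub is in use
systemctl is-active systemd-resolved     # confirm whether the stub resolver is actually running
cat /run/systemd/resolve/resolv.conf     # the real upstream nameservers systemd-resolved discovered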
@chrisohaver sorry for not looking earlier.
@liheyuan, I'm trying to understand the root cause of this issue better. If you don't mind sharing, do you happen to know what DNS QPS rates your cluster is exhibiting? Under high load, coredns can use more memory.
@chrisohaver Sorry for my late reply. I'm setting up the k8s cluster as a test env, so the DNS QPS is very low, around ~2/sec.
Hey folks, do we have a canonical repro setup? I'm seeing a lot of anecdotal details, but not a 100% consistent reproducer...
fixed in the latest coredns as outlined in:
@liheyuan how often is CoreDNS OOM restarting? If we assume the root cause was the recently fixed cache issue: at your cluster's 2 QPS (as you say above), it would take at minimum about 24 hours for the cache to exhaust... and even then, only if every query made is unique (~230,000 unique DNS names), which is extremely unusual.
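As a back-of-envelope check of that estimate, using only the figures quoted in this thread (2 QPS, ~230,000 unique names):

# hours of continuous, all-unique traffic needed to issue ~230,000 queries at 2 QPS
echo "230000 / 2 / 3600" | bc -l     # roughly 32 hours, i.e. more than a full day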
I'm reopening as we need a PR to update the CoreDNS image version to 1.2.2 and a PR to update the image in gcr.io.
Yes, @timothysc I will be pushing the PR once the CoreDNS image is available in gcr.io.
xref - kubernetes/kubernetes#68020
@timothysc you mean update CoreDNS to v1.2.2?
@timothysc, This issue is in a test environment with 2 QPS. I really don't think it's related to the cache issue fixed in CoreDNS 1.2.2 at all (which requires high QPS to manifest). This could instead be a case of kubernetes/kubernetes#64665 failing to detect systemd-resolved and adjust kubelet flags... or perhaps systemd-resolved failed and left the system in a bad state (e.g. /etc/resolv.conf still contains the local address, but systemd-resolved isn't running). kubernetes/kubernetes#64665 checks to see if systemd-resolved is running; if it isn't, it assumes /etc/resolv.conf is OK. However, I see a comment on Stack Exchange (albeit old) about how to disable systemd-resolved which suggests that simply disabling the service leaves /etc/resolv.conf in a bad state.
@chrisohaver Just as in the first post: it keeps restarting, i.e. it crashes and restarts. When I use DNS to ping a cluster service, it crashes again. No DNS query, no crash. After a query, it then crashes.
@liheyuan, This behavior lines up with infinite recursion caused by a local address present in the resolv.conf that CoreDNS uses. Please check the following...
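For example, a few checks along those lines (a sketch only, not necessarily the original checklist):

# look at what the proxy/forward directive in the Corefile points to
kubectl -n kube-system get configmap coredns -o yaml

# the resolv.conf that kubelet hands to pods comes from the node (or from --resolv-conf);
# make sure it lists no loopback address such as 127.0.0.53 or 127.0.0.1
cat /etc/resolv.conf
ps -ef | grep kubelet    # check whether kubelet was started with a --resolv-conf override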
@timothysc fyi CoreDNS version 1.2.2 is now available on gcr.io
nameserver 183.60.83.19
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
Nope, I have also checked the DNS Pod's resolv.conf; it's also nameserver 183.60.83.19. BTW, ...
@liheyuan, Generally loops are caused by forwarding paths, and your /etc/resolv.conf shows that there are no self loops there. It is possible that your upstreams are configured to forward back to CoreDNS, although this would be very unlikely (because there would not be a practical reason for doing so). The other possibility is if your CoreDNS configmap is configured to forward to itself. But this is also not likely, because it's not the default configuration. If you care to troubleshoot further, you can enable logging in coredns by adding the log plugin to the Corefile. The latest image of CoreDNS (1.2.1) also has a loop detection plugin, which you can enable by adding loop to the Corefile.
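As an illustration of where those directives go (a sketch only; the surrounding Corefile is abbreviated here and the default contents can differ by Kubernetes version):

kubectl -n kube-system edit configmap coredns
# then add the two plugins to the server block in the Corefile, e.g.:
#   .:53 {
#       errors
#       log     # log every query so the looping name becomes visible
#       loop    # CoreDNS 1.2.1+: detect forwarding loops and fail fast instead of spinning
#       health
#       kubernetes cluster.local in-addr.arpa ip6.arpa {
#           pods insecure
#           upstream
#       }
#       proxy . /etc/resolv.conf
#       cache 30
#   }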
@neolit123 Can we set the CoreDNS version (except by modifying the hard-coded CoreDNS version) when we run kubeadm init?
@xlgao-zju as outlined here we have a bit of an issue with allowing only the custom coredns image/version:
We're going to close this issue, but folks can rally on config overrides on a different issue.
@liheyuan, is your issue resolved?
@chrisohaver Hi. I'm facing the same issue with coredns (loop restarts) and I see the issue is due to the memory limit of 170Mi. Can you suggest how I can update my coredns deployment to 1.2.2, or how to increase the memory limit of the coredns deployment? I am using k8s version 1.11.2.
Sometimes this is the reason, but not always. Continuous Pod restarts can be caused by any error that causes a container in a Pod to exit (e.g. by crash, by fatal error, or by being killed by another process). You can edit the coredns deployment to raise the memory limit.
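One way to do that (a sketch; the 512Mi figure is just an example value, pick one that fits your nodes):

# open the deployment and raise resources.limits.memory on the coredns container
kubectl -n kube-system edit deployment coredns

# or patch it non-interactively
kubectl -n kube-system patch deployment coredns --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"512Mi"}]'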
@chrisohaver Thanks a lot :) It worked. The coredns pods are consuming close to 478Mi, so it worked with a memory limit of 512Mi.
Another way to update the coredns version and raise the memory limit:
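One way this could look with kubectl (a sketch, not necessarily the steps originally posted; the image tag and limit value are examples taken from earlier in this thread):

# point the coredns container at the newer image
kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.2.2

# raise the memory limit on the same container
kubectl -n kube-system set resources deployment/coredns -c coredns --limits=memory=512Mi

# watch the rollout
kubectl -n kube-system rollout status deployment/coredns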
I have kubeadm v1.12.0 on Debian 9 and solved this issue by switching from Calico to Weave.
I was facing the same issue for the last 2 days; I created 6 VMs to resolve this issue. :) I am posting the complete command list to create a kubeadm cluster - just follow this:

curl -sL https://gist.githubusercontent.com/alexellis/7315e75635623667c32199368aa11e95/raw/b025dfb91b43ea9309ce6ed67e24790ba65d7b67/kube.sh | sudo sh
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.1.1.5 --kubernetes-version stable
(You must replace --apiserver-advertise-address with the IP of your master host.)
sudo useradd kubeadmin -G sudo -m -s /bin/bash
sudo passwd kubeadmin
sudo su kubeadmin
cd $HOME
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get all --namespace=kube-system

Please try the above commands to create your cluster. Let me know if this works for you.
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release): Ubuntu 16.04 LTS X64
Kernel (e.g. uname -a): 4.4.0-91-generic #114-Ubuntu SMP
Others:
What happened?
CoreDNS keeps getting OOM-killed and restarting; other pods work fine.
get pod status
NAMESPACE NAME READY STATUS RESTARTS AGE
....
kube-system coredns-78fcdf6894-ls2q4 0/1 CrashLoopBackOff 12 1h
kube-system coredns-78fcdf6894-xn75c 0/1 CrashLoopBackOff 12 1h
....
describe the pod
Name: coredns-78fcdf6894-ls2q4
Namespace: kube-system
Priority: 0
PriorityClassName:
Node: k8s1/172.21.0.8
Start Time: Tue, 07 Aug 2018 11:59:37 +0800
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: cni.projectcalico.org/podIP=192.168.0.7/32
Status: Running
IP: 192.168.0.7
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Container ID: docker://519046f837c93439a77d75288e6d630cdbcefe875b0bdb6aa5409d566070ec03
Image: k8s.gcr.io/coredns:1.1.3
Image ID: docker-pullable://k8s.gcr.io/coredns@sha256:db2bf53126ed1c761d5a41f24a1b82a461c85f736ff6e90542e9522be4757848
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Tue, 07 Aug 2018 13:07:21 +0800
Finished: Tue, 07 Aug 2018 13:08:21 +0800
Ready: False
Restart Count: 12
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-tsv2g (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-tsv2g:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-tsv2g
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Warning Unhealthy 44m kubelet, k8s1 Liveness probe failed: Get http://192.168.0.7:8080/health: dial tcp 192.168.0.7:8080: connect: connection refused
Normal Pulled 41m (x5 over 1h) kubelet, k8s1 Container image "k8s.gcr.io/coredns:1.1.3" already present on machine
Normal Created 41m (x5 over 1h) kubelet, k8s1 Created container
Normal Started 41m (x5 over 1h) kubelet, k8s1 Started container
Warning Unhealthy 40m kubelet, k8s1 Liveness probe failed: Get http://192.168.0.7:8080/health: read tcp 172.21.0.8:40972->192.168.0.7:8080: read: connection reset by peer
Warning Unhealthy 34m (x2 over 38m) kubelet, k8s1 Liveness probe failed: Get http://192.168.0.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning BackOff 4m (x124 over 44m) kubelet, k8s1 Back-off restarting failed container
logs of pod
.:53
CoreDNS-1.1.3
linux/amd64, go1.10.1, b0fd575c
2018/08/07 05:13:27 [INFO] CoreDNS-1.1.3
2018/08/07 05:13:27 [INFO] linux/amd64, go1.10.1, b0fd575c
2018/08/07 05:13:27 [INFO] plugin/reload: Running configuration MD5 = 2a066f12ec80aeb2b92740dd74c17138
ram usage of master
Mem: 1872 711 365 8 795 960
Swap: 0 0 0
ram usage of slave
Mem: 1872 392 78 17 1400 1250
Swap: 0 0 0
What you expected to happen?
CoreDNS keeps working and does not restart.
How to reproduce it (as minimally and precisely as possible)?
kubeadm init --apiserver-advertise-address=10.4.96.3 --pod-network-cidr=192.168.0.0/16
use Calico network mode
run kubeadm join on the second slave machine
node status is Ready for both
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s1 Ready master 1h v1.11.1
k8s2 Ready 1h v1.11.1
Anything else we need to know?
I'm testing on hosts with 2GB RAM; not sure if that is too small for k8s.