Flannel POD OOMKilled #963

Closed
jbpin opened this issue Mar 20, 2018 · 16 comments

jbpin commented Mar 20, 2018

Flannel pods have 600 restarts on the worker nodes only (not on the master) due to OOMKilled.

Expected Behavior

No Restart

Possible Solution

Increase memory limit

Steps to Reproduce (for bugs)

Install flannel using the provided YAML (Documentation/kube-flannel.yml):

resources:
  requests:
    cpu: "100m"
    memory: "50Mi"
  limits:
    cpu: "100m"
    memory: "50Mi"

Context

1 master / 2 nodes

Your Environment

  • Flannel version: v0.10.0-amd64
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.1.11
  • Kubernetes version (if used): 1.9.4
  • Operating System and version: Ubuntu 16.04.4

What's the recommended memory limit for Flannel?


jbpin commented Mar 22, 2018

We have updated the resource limits and there are no more restarts. So the question is:

What's the recommended memory limit for Flannel running as a pod on a Kubernetes node?

@jansmets

I'm having the same issue.


kamrar commented Aug 25, 2018

Same here.


Tapolsky commented Sep 5, 2018

The same issue for me. Fixed by setting a 256Mi memory limit in the flannel DaemonSet spec.
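
A minimal sketch of such an override in the kube-flannel container of the DaemonSet (the 256Mi limit is the value reported above; keeping the request equal to the limit is an assumption, chosen so the pod keeps the same QoS class as the stock manifest):

resources:
  requests:
    cpu: "100m"
    memory: "256Mi"   # request kept equal to the limit (assumption, preserves QoS class)
  limits:
    cpu: "100m"
    memory: "256Mi"   # the value reported to stop the OOM kills in this thread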


ttarczynski commented Oct 22, 2018

I'm observing a very similar issue.

  • flannel version: v0.10.0
    • installed from https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
  • kubernetes version: v1.11.3
  • OS: CentOS 7.3.1611
  • kernel: 3.10.0-862.11.6.el7.x86_64
  • I've observed flannel getting OOMKilled
    • with original 50Mi memory limit
    • then increased the limit to 100Mi, but it got OOMKilled again
  • my cluster is only 10 nodes

I didn't observe these issues with flannel v0.9.1 as it didn't have the memory limit set.

@tomdee do you think this issue may be caused by the memory limit being too low? (as advised by @Tapolsky above)


Here's the status of this OOMKilled pod:

# kubectl get pod  | egrep -vi running
NAME                                    READY     STATUS                   RESTARTS   AGE
kube-flannel-ds-mcg7n                   0/1       Init:RunContainerError   1          7d
Name:               kube-flannel-ds-mcg7n
Node:               node1/172.24.11.44
Start Time:         Mon, 15 Oct 2018 07:51:22 +0200
Status:             Running
Controlled By:      DaemonSet/kube-flannel-ds
Init Containers:
  install-cni:
    Container ID:  
    Image:         quay.io/coreos/flannel:v0.10.0-amd64
    Image ID:      
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      containerd: container not started
      Exit Code:    128
      Started:      Sun, 21 Oct 2018 02:45:27 +0200
      Finished:     Sun, 21 Oct 2018 02:45:27 +0200
    Ready:          False
    Restart Count:  1

Containers:
  kube-flannel:
    Container ID:  docker://be496e7701c9087386e38932b6962b4d50db697cfd3074257cd315fe63cce50b
    Image:         quay.io/coreos/flannel:v0.10.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    2
      Started:      Sun, 21 Oct 2018 02:40:19 +0200
      Finished:     Sun, 21 Oct 2018 02:40:19 +0200
    Ready:          False
    Restart Count:  13
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  100Mi

Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 

Events:
  Type     Reason                  Age                  From                  Message
  ----     ------                  ----                 ----                  -------
  Normal   SandboxChanged          8m (x40462 over 5d)  kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  3m (x39973 over 1d)  kubelet, node1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-mcg7n": Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\"\n"

And status on the node:

  • There seems to be 296 install-cni container instances and 14 kube-flannel instances:
[2018-10-22 08:33:23] # ls /var/lib/kubelet/pods/588b1245-d03e-11e8-b788-02010035514b/containers/install-cni/ | wc -l 
296
[2018-10-22 08:33:27] # ls /var/lib/kubelet/pods/588b1245-d03e-11e8-b788-02010035514b/containers/kube-flannel/ | wc -l 
14
  • oom-killer logs in dmesg:
Oct 22 08:29:07 node1 kernel: exe invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-999
Oct 22 08:29:07 node1 kernel: Task in /kubepods/pod588b1245-d03e-11e8-b788-02010035514b/0673b08fed23d22ddb3e6eea674c9356371ca39f9abf1258c8bc7c032f49b50b killed as a result 
Oct 22 08:29:07 node1 kernel: memory: usage 102400kB, limit 102400kB, failcnt 3794299
Oct 22 08:29:07 node1 kernel: memory+swap: usage 102400kB, limit 9007199254740988kB, failcnt 0
Oct 22 08:29:07 node1 kernel: kmem: usage 102396kB, limit 9007199254740988kB, failcnt 0
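
Worth noting in the dmesg lines above: kmem usage (102396kB) accounts for nearly all of the 102400kB cgroup limit, so it is kernel memory charged to the pod's cgroup, not flanneld's own RSS, that hits the limit. A rough way to confirm this on a cgroup-v1 node (a sketch; the pod-UID path is taken from the log above and will differ on other nodes and with the systemd cgroup driver):

cd /sys/fs/cgroup/memory/kubepods/pod588b1245-d03e-11e8-b788-02010035514b
cat memory.limit_in_bytes        # configured limit (100Mi here)
cat memory.usage_in_bytes        # total memory charged to the pod cgroup
cat memory.kmem.usage_in_bytes   # kernel-memory share of that total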


Dieken commented Oct 24, 2018

@ttarczynski I guess recent releases of flannel consume more memory; flannel 0.10 changed a lot in the vxlan backend. I also hit this OOMKilled issue and increased the memory limit from 50Mi to 100Mi. You are free to raise the limit further; the original purpose of #855 was to set explicit resource requests and limits so the pod gets the "Guaranteed" QoS class. But better not to raise it too much: it's fine to OOM-kill flannel if it misbehaves, and network connectivity won't be affected as long as flanneld restarts quickly.

@tomdee Maybe it's time to increase the default memory limit? Or better, to profile why it consumes so much memory? My cluster has fewer than ten k8s worker nodes.
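
For context on the QoS point above: Kubernetes assigns the Guaranteed class when every container's requests equal its limits, which is why #855 set both. A quick way to check which class a flannel pod actually received (a sketch; substitute your own pod name, the one below is taken from the describe output earlier in this thread):

kubectl -n kube-system get pod kube-flannel-ds-mcg7n -o jsonpath='{.status.qosClass}'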

@ttarczynski

@Dieken In my case I had already increased the limit from 50Mi to 100Mi and it still got OOMKilled.
What's worse, after the OOM kill it was not able to start again: it showed ContainerCannotRun for the install-cni init container.
I've seen this problem in my environment about once per week, and each time I needed to delete the flannel pods to fix it. This week I've increased the limit to 256Mi and will see if that helps.

My cluster is also only 10 nodes and I always see the memory/working_set metric values below 30 MiB.
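
If metrics-server is installed, the working-set figure mentioned above (cAdvisor's container_memory_working_set_bytes) can be sampled per pod roughly like this (a sketch, assuming the manifest's app=flannel pod label):

kubectl -n kube-system top pod -l app=flannel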


Dieken commented Oct 24, 2018

@ttarczynski I double-checked my clusters. Luckily the flannel pods didn't restart due to OOMKilled; rarely, they restarted with exit code 255, possibly due to some kube-apiserver error.

Because we both have small clusters, I suspect flanneld 0.10 introduced a defect.

BTW, you may try an older version. I use k8s 1.8.13 + flannel 0.9.0 and k8s 1.9.8 + flannel 0.9.1, both with a 100Mi memory limit for flannel, and actual memory usage was below 30Mi when I just checked.
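
For anyone trying that rollback, older tags appear to follow the same manifest path as the v0.10.0 URL quoted earlier in this thread; this is an assumption, so verify the file exists for the tag before applying:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml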


chemist commented Jan 18, 2019

The same problem. Rolling back to 0.9.1 fixed it.

@ttarczynski

I think the problems I've seen were related to the default kernel version in CentOS 7 (kernel-3.10.0-*.el7).
It seems Kubernetes v1.9 or higher running on the stock CentOS 7 kernel can result in a "slab cache" memory leak.

And probably this was causing flannel to be OOMKilled in my cluster.
I upgraded to a newer kernel version a month ago and haven't seen the flannel OOMKilled issue since.
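
To check whether a node is on one of the affected kernels and whether slab memory is actually growing, a quick sketch using standard CentOS 7 tools (run slabtop as root):

uname -r                                        # affected reports involve 3.10.0-*.el7 kernels
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
slabtop -o | head -n 15                         # largest slab caches, one-shot output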


shettyh commented Mar 9, 2019

I have upgraded the kernel to 4.4.176-1.el7.elrepo.x86_64 and Docker to 18.09.3, with Kubernetes 1.13 and flannel 0.11.

But I am still facing OOMKilled with the default memory limits.

Any known fixes for this? Does increasing the memory limit actually solve this issue?

Interestingly enough, I am facing this issue only on server-class machines!

More details:

Kubernetes details (screenshot not reproduced here)

System logs (screenshot not reproduced here)


lubinsz commented Jul 4, 2019

@shettyh

Maybe you can increase the memory limit and test it.
Try this command:

kubectl patch ds -n=kube-system kube-flannel-ds-amd64 -p '{"spec": {"template":{"spec":{"containers": [{"name":"kube-flannel", "resources": {"limits": {"cpu": "250m","memory": "550Mi"},"requests": {"cpu": "100m","memory": "100Mi"}}}]}}}}'
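
To confirm the patch landed, something like this should show the updated resources (a sketch; jsonpath output formatting varies slightly across kubectl versions):

kubectl -n kube-system get ds kube-flannel-ds-amd64 -o jsonpath='{.spec.template.spec.containers[0].resources}'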


shettyh commented Jul 7, 2019

@lubinsz Yes, I increased the memory limit. Now it works fine.

Thanks.


willzhang commented Nov 23, 2019

Same problem.
kubectl -n kube-system describe pods kube-flannelxxx

Message:      OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:309: getting pipe fds for pid 1033 caused \"readlink /proc/1033/fd/0: no such file or directory\"": unknown
Events:
  Type     Reason                  Age                      From                 Message
  ----     ------                  ----                     ----                 -------
  Warning  FailedCreatePodSandBox  12m (x21991 over 11h)    kubelet, k8snode150  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-cpbb6": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
  Warning  FailedCreatePodSandBox  7m49s (x8513 over 11h)   kubelet, k8snode150  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-cpbb6": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:297: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
  Normal   SandboxChanged          2m49s (x34367 over 39h)  kubelet, k8snode150  Pod sandbox changed, it will be killed and re-created.

On that node:
tail -f /var/log/messages

Nov 23 19:57:00 localhost dockerd: time="2019-11-23T19:57:00.775995545+08:00" level=error msg="Handler for POST /v1.38/containers/73711438287b3845db8ffbd6d472e5223732c16f80f102ee168da470235d6770/start returned error: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:00 localhost kubelet: E1123 19:57:00.776463   20579 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:00 localhost kubelet: E1123 19:57:00.776574   20579 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:00 localhost kubelet: E1123 19:57:00.776608   20579 kuberuntime_manager.go:666] createPodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:00 localhost kubelet: E1123 19:57:00.776706   20579 pod_workers.go:190] Error syncing pod a4889fe0-0d27-11ea-99dc-0050569702dd ("kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-flannel-ds-amd64-7z789\": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:00 localhost kubelet: W1123 19:57:00.777455   20579 container.go:409] Failed to create summary reader for "/kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/73711438287b3845db8ffbd6d472e5223732c16f80f102ee168da470235d6770": none of the resources are being tracked.
Nov 23 19:57:01 localhost kubelet: W1123 19:57:01.174208   20579 pod_container_deletor.go:75] Container "73711438287b3845db8ffbd6d472e5223732c16f80f102ee168da470235d6770" not found in pod's containers
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad/shim.sock" debug=false pid=33416
Nov 23 19:57:01 localhost kernel: runc:[1:CHILD] invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-998
Nov 23 19:57:01 localhost kernel: runc:[1:CHILD] cpuset=fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad mems_allowed=0-1
Nov 23 19:57:01 localhost kernel: CPU: 25 PID: 33433 Comm: runc:[1:CHILD] Kdump: loaded Tainted: G        W      ------------ T 3.10.0-1062.el7.x86_64 #1
Nov 23 19:57:01 localhost kernel: Hardware name: LENOVO System x3650 M5: -[8871AC1]-/01DC328, BIOS -[TCE130J-2.40]- 04/11/2017
Nov 23 19:57:01 localhost kernel: Call Trace:
Nov 23 19:57:01 localhost kernel: [<ffffffffb5379262>] dump_stack+0x19/0x1b
Nov 23 19:57:01 localhost kernel: [<ffffffffb5373c04>] dump_header+0x90/0x229
Nov 23 19:57:01 localhost kernel: [<ffffffffb4f0825b>] ? cred_has_capability+0x6b/0x120
Nov 23 19:57:01 localhost kernel: [<ffffffffb4dedb1b>] ? do_wp_page+0xfb/0x720
Nov 23 19:57:01 localhost kernel: [<ffffffffb4dbfd74>] oom_kill_process+0x254/0x3e0
Nov 23 19:57:01 localhost kernel: [<ffffffffb4e3c666>] mem_cgroup_oom_synchronize+0x546/0x570
Nov 23 19:57:01 localhost kernel: [<ffffffffb4e3bae0>] ? mem_cgroup_charge_common+0xc0/0xc0
Nov 23 19:57:01 localhost kernel: [<ffffffffb4dc0614>] pagefault_out_of_memory+0x14/0x90
Nov 23 19:57:01 localhost kernel: [<ffffffffb53721b2>] mm_fault_error+0x6a/0x157
Nov 23 19:57:01 localhost kernel: [<ffffffffb53868b1>] __do_page_fault+0x491/0x500
Nov 23 19:57:01 localhost kernel: [<ffffffffb5386955>] do_page_fault+0x35/0x90
Nov 23 19:57:01 localhost kernel: [<ffffffffb5382768>] page_fault+0x28/0x30
Nov 23 19:57:01 localhost kernel: Task in /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad killed as a result of limit of /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd
Nov 23 19:57:01 localhost kernel: memory: usage 51200kB, limit 51200kB, failcnt 251087
Nov 23 19:57:01 localhost kernel: memory+swap: usage 51200kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:01 localhost kernel: kmem: usage 51064kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:01 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd: cache:4KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:01 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/e740f5e9c38080fcd11e7d3f1107b376dfbfc329d76f3db3580bbc7e60474390: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:01 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/aa42ccd424c5c40084938cec093d47c1179d83214dce1d9341854e63f5f59d29: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:01 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad: cache:0KB rss:36KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:01 localhost kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Nov 23 19:57:01 localhost kernel: [33432]     0 33432     4599     1086      14        0          -998 runc:[0:PARENT]
Nov 23 19:57:01 localhost kernel: [33433]     0 33433     4599      596      14        0          -998 runc:[1:CHILD]
Nov 23 19:57:01 localhost kernel: Memory cgroup out of memory: Kill process 33433 (runc:[1:CHILD]) score 0 or sacrifice child
Nov 23 19:57:01 localhost kernel: Killed process 33433 (runc:[1:CHILD]), UID 0, total-vm:18396kB, anon-rss:2384kB, file-rss:0kB, shmem-rss:0kB
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01+08:00" level=info msg="shim reaped" id=fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01.698294061+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01.698318601+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01.867842722+08:00" level=error msg="fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad cleanup: failed to delete container from containerd: no such container"
Nov 23 19:57:01 localhost dockerd: time="2019-11-23T19:57:01.867927799+08:00" level=error msg="Handler for POST /v1.38/containers/fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad/start returned error: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:01 localhost kubelet: E1123 19:57:01.868413   20579 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:01 localhost kubelet: E1123 19:57:01.868483   20579 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:01 localhost kubelet: E1123 19:57:01.868571   20579 kuberuntime_manager.go:666] createPodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:01 localhost kubelet: E1123 19:57:01.868692   20579 pod_workers.go:190] Error syncing pod a4889fe0-0d27-11ea-99dc-0050569702dd ("kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-flannel-ds-amd64-7z789\": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:01 localhost kubelet: W1123 19:57:01.869354   20579 container.go:409] Failed to create summary reader for "/kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad": none of the resources are being tracked.
Nov 23 19:57:02 localhost kubelet: W1123 19:57:02.293778   20579 pod_container_deletor.go:75] Container "fc00c5a04ebf4b43cc07e0151b05ee09cc478f00d1a9a37c68492b278a9e96ad" not found in pod's containers
Nov 23 19:57:02 localhost dockerd: time="2019-11-23T19:57:02+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e/shim.sock" debug=false pid=33436
Nov 23 19:57:02 localhost kernel: runc:[1:CHILD] invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-998
Nov 23 19:57:02 localhost kernel: runc:[1:CHILD] cpuset=15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e mems_allowed=0-1
Nov 23 19:57:02 localhost kernel: CPU: 37 PID: 33453 Comm: runc:[1:CHILD] Kdump: loaded Tainted: G        W      ------------ T 3.10.0-1062.el7.x86_64 #1
Nov 23 19:57:02 localhost kernel: Hardware name: LENOVO System x3650 M5: -[8871AC1]-/01DC328, BIOS -[TCE130J-2.40]- 04/11/2017
Nov 23 19:57:02 localhost kernel: Call Trace:
Nov 23 19:57:02 localhost kernel: [<ffffffffb5379262>] dump_stack+0x19/0x1b
Nov 23 19:57:02 localhost kernel: [<ffffffffb5373c04>] dump_header+0x90/0x229
Nov 23 19:57:02 localhost kernel: [<ffffffffb4f0825b>] ? cred_has_capability+0x6b/0x120
Nov 23 19:57:02 localhost kernel: [<ffffffffb4dedb1b>] ? do_wp_page+0xfb/0x720
Nov 23 19:57:02 localhost kernel: [<ffffffffb4dbfd74>] oom_kill_process+0x254/0x3e0
Nov 23 19:57:02 localhost kernel: [<ffffffffb4e3c666>] mem_cgroup_oom_synchronize+0x546/0x570
Nov 23 19:57:02 localhost kernel: [<ffffffffb4e3bae0>] ? mem_cgroup_charge_common+0xc0/0xc0
Nov 23 19:57:02 localhost kernel: [<ffffffffb4dc0614>] pagefault_out_of_memory+0x14/0x90
Nov 23 19:57:02 localhost kernel: [<ffffffffb53721b2>] mm_fault_error+0x6a/0x157
Nov 23 19:57:02 localhost kernel: [<ffffffffb53868b1>] __do_page_fault+0x491/0x500
Nov 23 19:57:02 localhost kernel: [<ffffffffb5386955>] do_page_fault+0x35/0x90
Nov 23 19:57:02 localhost kernel: [<ffffffffb5382768>] page_fault+0x28/0x30
Nov 23 19:57:02 localhost kernel: Task in /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e killed as a result of limit of /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd
Nov 23 19:57:02 localhost kernel: memory: usage 51200kB, limit 51200kB, failcnt 251101
Nov 23 19:57:02 localhost kernel: memory+swap: usage 51200kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:02 localhost kernel: kmem: usage 51064kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:02 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd: cache:4KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:02 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/e740f5e9c38080fcd11e7d3f1107b376dfbfc329d76f3db3580bbc7e60474390: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:02 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/aa42ccd424c5c40084938cec093d47c1179d83214dce1d9341854e63f5f59d29: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:02 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e: cache:0KB rss:36KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:02 localhost kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Nov 23 19:57:02 localhost kernel: [33452]     0 33452     4599     1086      15        0          -998 runc:[0:PARENT]
Nov 23 19:57:02 localhost kernel: [33453]     0 33453     4599      596      14        0          -998 runc:[1:CHILD]
Nov 23 19:57:02 localhost kernel: Memory cgroup out of memory: Kill process 33453 (runc:[1:CHILD]) score 0 or sacrifice child
Nov 23 19:57:02 localhost kernel: Killed process 33453 (runc:[1:CHILD]), UID 0, total-vm:18396kB, anon-rss:2384kB, file-rss:0kB, shmem-rss:0kB
Nov 23 19:57:02 localhost dockerd: time="2019-11-23T19:57:02+08:00" level=info msg="shim reaped" id=15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e
Nov 23 19:57:02 localhost dockerd: time="2019-11-23T19:57:02.883601306+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:02 localhost dockerd: time="2019-11-23T19:57:02.883603500+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03.068423480+08:00" level=error msg="15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e cleanup: failed to delete container from containerd: no such container"
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03.068501443+08:00" level=error msg="Handler for POST /v1.38/containers/15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e/start returned error: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:03 localhost kubelet: E1123 19:57:03.068973   20579 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:03 localhost kubelet: E1123 19:57:03.069056   20579 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:03 localhost kubelet: E1123 19:57:03.069087   20579 kuberuntime_manager.go:666] createPodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:03 localhost kubelet: E1123 19:57:03.069190   20579 pod_workers.go:190] Error syncing pod a4889fe0-0d27-11ea-99dc-0050569702dd ("kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-flannel-ds-amd64-7z789\": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:03 localhost kubelet: W1123 19:57:03.069984   20579 container.go:409] Failed to create summary reader for "/kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e": none of the resources are being tracked.
Nov 23 19:57:03 localhost kubelet: W1123 19:57:03.413644   20579 pod_container_deletor.go:75] Container "15bb45cd6143b061e8a3df9e033c197fb4789666d02e909154d6215d674b750e" not found in pod's containers
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b/shim.sock" debug=false pid=33459
Nov 23 19:57:03 localhost kernel: runc:[1:CHILD] invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-998
Nov 23 19:57:03 localhost kernel: runc:[1:CHILD] cpuset=05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b mems_allowed=0-1
Nov 23 19:57:03 localhost kernel: CPU: 37 PID: 33476 Comm: runc:[1:CHILD] Kdump: loaded Tainted: G        W      ------------ T 3.10.0-1062.el7.x86_64 #1
Nov 23 19:57:03 localhost kernel: Hardware name: LENOVO System x3650 M5: -[8871AC1]-/01DC328, BIOS -[TCE130J-2.40]- 04/11/2017
Nov 23 19:57:03 localhost kernel: Call Trace:
Nov 23 19:57:03 localhost kernel: [<ffffffffb5379262>] dump_stack+0x19/0x1b
Nov 23 19:57:03 localhost kernel: [<ffffffffb5373c04>] dump_header+0x90/0x229
Nov 23 19:57:03 localhost kernel: [<ffffffffb4f0825b>] ? cred_has_capability+0x6b/0x120
Nov 23 19:57:03 localhost kernel: [<ffffffffb4dedb1b>] ? do_wp_page+0xfb/0x720
Nov 23 19:57:03 localhost kernel: [<ffffffffb4dbfd74>] oom_kill_process+0x254/0x3e0
Nov 23 19:57:03 localhost kernel: [<ffffffffb4e3c666>] mem_cgroup_oom_synchronize+0x546/0x570
Nov 23 19:57:03 localhost kernel: [<ffffffffb4e3bae0>] ? mem_cgroup_charge_common+0xc0/0xc0
Nov 23 19:57:03 localhost kernel: [<ffffffffb4dc0614>] pagefault_out_of_memory+0x14/0x90
Nov 23 19:57:03 localhost kernel: [<ffffffffb53721b2>] mm_fault_error+0x6a/0x157
Nov 23 19:57:03 localhost kernel: [<ffffffffb53868b1>] __do_page_fault+0x491/0x500
Nov 23 19:57:03 localhost kernel: [<ffffffffb5386955>] do_page_fault+0x35/0x90
Nov 23 19:57:03 localhost kernel: [<ffffffffb5382768>] page_fault+0x28/0x30
Nov 23 19:57:03 localhost kernel: Task in /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b killed as a result of limit of /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd
Nov 23 19:57:03 localhost kernel: memory: usage 51200kB, limit 51200kB, failcnt 251114
Nov 23 19:57:03 localhost kernel: memory+swap: usage 51200kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:03 localhost kernel: kmem: usage 51064kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:03 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd: cache:4KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:03 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/e740f5e9c38080fcd11e7d3f1107b376dfbfc329d76f3db3580bbc7e60474390: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:03 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/aa42ccd424c5c40084938cec093d47c1179d83214dce1d9341854e63f5f59d29: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:03 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b: cache:0KB rss:36KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:03 localhost kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Nov 23 19:57:03 localhost kernel: [33475]     0 33475     4599     1087      14        0          -998 runc:[0:PARENT]
Nov 23 19:57:03 localhost kernel: [33476]     0 33476     4599      597      14        0          -998 runc:[1:CHILD]
Nov 23 19:57:03 localhost kernel: Memory cgroup out of memory: Kill process 33476 (runc:[1:CHILD]) score 0 or sacrifice child
Nov 23 19:57:03 localhost kernel: Killed process 33476 (runc:[1:CHILD]), UID 0, total-vm:18396kB, anon-rss:2388kB, file-rss:0kB, shmem-rss:0kB
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03+08:00" level=info msg="shim reaped" id=05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03.987164282+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:03 localhost dockerd: time="2019-11-23T19:57:03.987170888+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:04 localhost dockerd: time="2019-11-23T19:57:04.168831286+08:00" level=error msg="05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b cleanup: failed to delete container from containerd: no such container"
Nov 23 19:57:04 localhost dockerd: time="2019-11-23T19:57:04.168903523+08:00" level=error msg="Handler for POST /v1.38/containers/05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b/start returned error: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:04 localhost kubelet: E1123 19:57:04.169442   20579 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:04 localhost kubelet: E1123 19:57:04.169507   20579 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:04 localhost kubelet: E1123 19:57:04.169569   20579 kuberuntime_manager.go:666] createPodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:04 localhost kubelet: E1123 19:57:04.169686   20579 pod_workers.go:190] Error syncing pod a4889fe0-0d27-11ea-99dc-0050569702dd ("kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-flannel-ds-amd64-7z789\": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:04 localhost kubelet: W1123 19:57:04.170316   20579 container.go:409] Failed to create summary reader for "/kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b": none of the resources are being tracked.
Nov 23 19:57:04 localhost kubelet: W1123 19:57:04.533086   20579 pod_container_deletor.go:75] Container "05c240ed3bc32b0b4f50d71d12cd5c6d60b83a1adb29479d7b873de441ae551b" not found in pod's containers
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2/shim.sock" debug=false pid=33479
Nov 23 19:57:05 localhost kernel: runc:[1:CHILD] invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-998
Nov 23 19:57:05 localhost kernel: runc:[1:CHILD] cpuset=0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2 mems_allowed=0-1
Nov 23 19:57:05 localhost kernel: CPU: 37 PID: 33496 Comm: runc:[1:CHILD] Kdump: loaded Tainted: G        W      ------------ T 3.10.0-1062.el7.x86_64 #1
Nov 23 19:57:05 localhost kernel: Hardware name: LENOVO System x3650 M5: -[8871AC1]-/01DC328, BIOS -[TCE130J-2.40]- 04/11/2017
Nov 23 19:57:05 localhost kernel: Call Trace:
Nov 23 19:57:05 localhost kernel: [<ffffffffb5379262>] dump_stack+0x19/0x1b
Nov 23 19:57:05 localhost kernel: [<ffffffffb5373c04>] dump_header+0x90/0x229
Nov 23 19:57:05 localhost kernel: [<ffffffffb4f0825b>] ? cred_has_capability+0x6b/0x120
Nov 23 19:57:05 localhost kernel: [<ffffffffb4dedb1b>] ? do_wp_page+0xfb/0x720
Nov 23 19:57:05 localhost kernel: [<ffffffffb4dbfd74>] oom_kill_process+0x254/0x3e0
Nov 23 19:57:05 localhost kernel: [<ffffffffb4e3c666>] mem_cgroup_oom_synchronize+0x546/0x570
Nov 23 19:57:05 localhost kernel: [<ffffffffb4e3bae0>] ? mem_cgroup_charge_common+0xc0/0xc0
Nov 23 19:57:05 localhost kernel: [<ffffffffb4dc0614>] pagefault_out_of_memory+0x14/0x90
Nov 23 19:57:05 localhost kernel: [<ffffffffb53721b2>] mm_fault_error+0x6a/0x157
Nov 23 19:57:05 localhost kernel: [<ffffffffb53868b1>] __do_page_fault+0x491/0x500
Nov 23 19:57:05 localhost kernel: [<ffffffffb5386955>] do_page_fault+0x35/0x90
Nov 23 19:57:05 localhost kernel: [<ffffffffb5382768>] page_fault+0x28/0x30
Nov 23 19:57:05 localhost kernel: Task in /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2 killed as a result of limit of /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd
Nov 23 19:57:05 localhost kernel: memory: usage 51200kB, limit 51200kB, failcnt 251127
Nov 23 19:57:05 localhost kernel: memory+swap: usage 51200kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:05 localhost kernel: kmem: usage 51064kB, limit 9007199254740988kB, failcnt 0
Nov 23 19:57:05 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd: cache:4KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:05 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/e740f5e9c38080fcd11e7d3f1107b376dfbfc329d76f3db3580bbc7e60474390: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:05 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/aa42ccd424c5c40084938cec093d47c1179d83214dce1d9341854e63f5f59d29: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:05 localhost kernel: Memory cgroup stats for /kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2: cache:0KB rss:36KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4KB inactive_file:0KB active_file:0KB unevictable:0KB
Nov 23 19:57:05 localhost kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Nov 23 19:57:05 localhost kernel: [33495]     0 33495     4599     1086      12        0          -998 runc:[0:PARENT]
Nov 23 19:57:05 localhost kernel: [33496]     0 33496     4599      596      12        0          -998 runc:[1:CHILD]
Nov 23 19:57:05 localhost kernel: Memory cgroup out of memory: Kill process 33496 (runc:[1:CHILD]) score 0 or sacrifice child
Nov 23 19:57:05 localhost kernel: Killed process 33496 (runc:[1:CHILD]), UID 0, total-vm:18396kB, anon-rss:2384kB, file-rss:0kB, shmem-rss:0kB
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05+08:00" level=info msg="shim reaped" id=0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05.188987233+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05.188991095+08:00" level=error msg="stream copy error: reading from a closed fifo"
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05.444555084+08:00" level=error msg="0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2 cleanup: failed to delete container from containerd: no such container"
Nov 23 19:57:05 localhost dockerd: time="2019-11-23T19:57:05.444625884+08:00" level=error msg="Handler for POST /v1.38/containers/0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2/start returned error: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:05 localhost kubelet: E1123 19:57:05.445176   20579 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:05 localhost kubelet: E1123 19:57:05.445251   20579 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:05 localhost kubelet: E1123 19:57:05.445283   20579 kuberuntime_manager.go:666] createPodSandbox for pod "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-7z789": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
Nov 23 19:57:05 localhost kubelet: E1123 19:57:05.445378   20579 pod_workers.go:190] Error syncing pod a4889fe0-0d27-11ea-99dc-0050569702dd ("kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-amd64-7z789_kube-system(a4889fe0-0d27-11ea-99dc-0050569702dd)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-flannel-ds-amd64-7z789\": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:301: running exec setns process for init caused \\\"signal: broken pipe\\\"\": unknown"
Nov 23 19:57:05 localhost kubelet: W1123 19:57:05.446210   20579 container.go:409] Failed to create summary reader for "/kubepods/poda4889fe0-0d27-11ea-99dc-0050569702dd/0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2": none of the resources are being tracked.
Nov 23 19:57:05 localhost kubelet: W1123 19:57:05.665022   20579 pod_container_deletor.go:75] Container "0a97dbe5aee2b80bccce267d31a801fdc13c53ae000dccd7f174f831542a73c2" not found in pod's containers

I have changed the limit; needs further observation.


stale bot commented Jan 26, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Jan 26, 2023
stale bot closed this as completed Feb 16, 2023

LUJUHUI commented Dec 22, 2023

The same issue for me. Fixed by setting a 256Mi memory limit in the flannel DaemonSet spec.
