k3s causes a high load average #294

Closed
drym3r opened this issue Mar 31, 2019 · 71 comments

@drym3r

drym3r commented Mar 31, 2019

Describe the bug
I'm not sure if it's a bug, but I don't think it's expected behaviour. Running k3s on any computer causes a very high load average. As a concrete example, I'll describe the situation on my Raspberry Pi 3 node.

When running k3s, I have a load average usage of:

load average: 2.69, 1.52, 1.79

Without running it, but still having the containers up, I have a load average of:

load average: 0.24, 1.01, 1.72

To Reproduce
I just run it without any special arguments, exactly as installed by the sh installer.

Expected behavior
The load average should be under 1.

@ibuildthecloud
Contributor

@drym3r is this a fresh install? Do you have anything deployed in Kubernetes? Do you see which processes are taking CPU?

@seabrookmx

I'm seeing the same behavior on my laptop (vanilla Ubuntu 18.04 - i5-3427u CPU).
Fresh install of k3s with nothing running on it.
htop shows "k3s server" hovering between 20 and 30% CPU usage.
containerd, traefik and friends all seem to stay below 1%.
stdout from k3s only shows the startup sequence and it doesn't log any errors or anything after that.

@drym3r
Author

drym3r commented Apr 6, 2019

@ibuildthecloud Yes, fresh install. I have things installed right now, but I saw this behaviour when I had nothing installed. Nothing is taking much CPU; like @seabrookmx, I only see a high load average. Load average is also affected by disk I/O, but I haven't detected any activity there either.
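
For what it's worth, a generic way to tell I/O-driven load apart from CPU-driven load (not part of the original comment) is to watch the "wa" (I/O wait) and "b" (blocked tasks) columns in vmstat:

vmstat 5 3   # three samples at 5-second intervals; high "wa"/"b" values point at disk I/O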

@benmur

benmur commented May 28, 2019

Seeing the same kind of k3s-server CPU usage (continuous 25-30%) on centos 7.6, vanilla k3s install, otherwise idle server.

The only resource deployed is kubevirt 0.17, but nothing is actually deployed for kubevirt to manage. Not sure if it's related, but I am also seeing #463 at the same time.

@runningman84

My cpu load of a small single node cluster is also quite high:

top - 15:19:19 up 20 days,  8:12,  1 user,  load average: 1.92, 2.02, 2.31
Tasks: 351 total,   1 running, 265 sleeping,   0 stopped,   0 zombie
%Cpu(s): 25.7 us, 15.1 sy,  0.1 ni, 58.0 id,  0.2 wa,  0.0 hi,  0.9 si,  0.0 st
KiB Mem :  7970176 total,   344068 free,  4440344 used,  3185764 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  4007944 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                             
 1136 root      20   0  944976 718932  72760 S 130.4  9.0  21602:46 k3s-server                                                                                                          
 1670 root      20   0  286068 136560  64308 S   6.9  1.7   1533:51 containerd                                                                                                          
25288 phil      20   0 1554952 341296  16248 S   5.3  4.3 283:18.57 prometheus                                                                                                          
 3849 root      20   0   41136  23376  13208 S   3.0  0.3   1455:01 speaker                                                                                                             
 5651 root      20   0  212384 121944  22672 S   3.0  1.5  48:28.77 fluxd                                                                                                               
 2887 root      20   0  142912  30660  18784 S   1.7  0.4 387:46.37 coredns                                                                                                             
 3558 nobody    20   0  175916 146508  18468 S   1.7  1.8   1586:37 tiller                                                                                                              
18419 root      20   0   73516  33312  14776 S   1.7  0.4  97:04.46 helm-operator                                                                                                       
11822 999       20   0 1007744  43224   3952 S   1.0  0.5 222:55.45 mongod                                                                                                              
21244 phil      20   0 7533224 1.320g  18364 S   1.0 17.4 555:32.21 java                                                                                                                
31635 root      20   0   78648  13656   2948 S   1.0  0.2   6:44.81 python3                                                                                                             
 2655 phil      20   0   43052   4268   3408 R   0.7  0.1   0:01.13 top                                                                                                                 
 9122 root      20   0  344164 119080  11864 S   0.7  1.5  68:48.69 hass                                                                                                                
    8 root      20   0       0      0      0 I   0.3  0.0  92:27.44 rcu_sched                                                                                                           
   28 root      20   0       0      0      0 S   0.3  0.0   8:50.50 ksoftirqd/3                                                                                                         
  585 phil      20   0  123160  22336   5232 S   0.3  0.3  23:32.13 alertmanager                                                                                                        
 1290 root      20   0       0      0      0 I   0.3  0.0   0:00.39 kworker/u8:3              

k3s manages to produce more load than the elasticsearch process and other resource intensive workloads.

@runningman84

v0.6.0-rc5 also causes a high cpu load...

@pascalw

pascalw commented Jul 10, 2019

I'm seeing about 20-25% CPU usage of a single Skylake Xeon core, constantly. Is this expected behaviour? With https://microk8s.io it's only a couple of percent.

@runningman84

v0.7.0-rc5 also causes a high cpu load...

@ibuildthecloud are you guys going to tackle this issue anytime soon?

@drym3r
Author

drym3r commented Jul 22, 2019

I'm using the latest stable version and still see too high a load average, but it's a little better now. I'm between 0.82, 0.93, 0.86 and 1.21, 0.93, 1.30. Even when it's under 1, there is too little traffic to justify it, but at least it's no longer blocking.

@pascalw

pascalw commented Jul 23, 2019

What kind of load avg, or idle CPU usage, would be considered normal?

@jrcichra

I'm getting a similar higher load average on my Raspberry Pi cluster.

@erikwilson
Contributor

Without a way to reproduce it this is hard to fix; any info that can be given about OS version, hardware, and k3s version is greatly appreciated.

I am not too worried about 10-30% usage, but 100%+ is not good. If there is anything helpful in the logs, sharing it would be greatly appreciated. It would also be good to obtain some profiling data from k3s; we may need to make a special build to help with that: https://github.com/golang/go/wiki/Performance
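
As a rough illustration of the profiling suggestion (hypothetical: it assumes a k3s build that exposes the standard Go net/http/pprof handlers, here on localhost:6060), a 30-second CPU profile could be captured with:

go tool pprof -seconds 30 http://localhost:6060/debug/pprof/profile   # interactive pprof session on the captured profile

The top, list and web commands inside pprof then show where the CPU time is going.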

@creinig

creinig commented Aug 2, 2019

Some details from me:

  • k3s version v0.5.0 (8c0116d)
  • development cluster with currently 2 nodes
    • contains one application that's completely idle in the examined timespan
    • contains a rook installation (mostly idle as well)
  • Host OS: ubuntu 18.04 LTS
  • Host Hardware: Xeon E5 with 4 physical / 8 effective cores, 96G RAM (master) / 64G (agent)
  • load on master: 1-2, according to htop completely dominated by "k3s server"
  • load on agent: about 0.8, dominated by a kafka installation that's also on the machine. "k3s agent" contributes very little to the load.
  • strace attached to the "k3s server" process produced very little (only occasional futex() calls and SIGCHLD + rt_sigreturn() combos)
    • strace cmdline: strace -r -C -o k3s.strace.txt -p <PID>

Excerpt from syslog on master:

Aug  2 10:36:20 k8smaster k3s[41722]: E0802 10:36:20.533869   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:21 k8smaster k3s[41722]: E0802 10:36:21.502906   41722 watcher.go:208] watch chan error: EOF
Aug  2 10:36:22 k8smaster k3s[41722]: E0802 10:36:22.532124   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:24 k8smaster k3s[41722]: E0802 10:36:24.532394   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:25 k8smaster k3s[41722]: E0802 10:36:25.511113   41722 watcher.go:208] watch chan error: EOF
Aug  2 10:36:25 k8smaster k3s[41722]: E0802 10:36:25.594629   41722 watcher.go:208] watch chan error: EOF
Aug  2 10:36:26 k8smaster k3s[41722]: E0802 10:36:26.182178   41722 watcher.go:208] watch chan error: EOF
Aug  2 10:36:26 k8smaster k3s[41722]: E0802 10:36:26.538714   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:27 k8smaster k3s[41722]: E0802 10:36:27.138981   41722 watcher.go:208] watch chan error: EOF
Aug  2 10:36:28 k8smaster k3s[41722]: E0802 10:36:28.532382   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:31 k8smaster k3s[41722]: E0802 10:36:30.533515   41722 kubelet_volumes.go:154] Orphaned pod "afa0f060-b498-11e9-8a40-000af74e3b38" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Aug  2 10:36:31 k8smaster k3s[41722]: E0802 10:36:30.555398   41722 watcher.go:208] watch chan error: EOF

The "orphaned pod" messages refer to kubernetes/kubernetes#60987 . Removing the orphans doesn't seem to have any effect on the load.

The "EOF" messages continue to be logged.

Aside from these two I can see no other log messages from k3s during normal (idle) operation.

@erikwilson
Contributor

The EOF messages should not be causing an issue. That is fixed in later versions of k3s; the fact that versions like v0.7.0-rc5 still have this issue is concerning.

Does the CPU spike right away or does it take time to build up?

What are the ulimits of the k3s server process?

What is the iostat of the k3s server process?

What services/things are installed that might be using the k8s API? And does disabling that thing reduce CPU consumption?
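
For anyone gathering that data, a sketch of commands that would answer the ulimit and iostat questions (the process-name match is an assumption):

PID=$(pgrep -f 'k3s server' | head -n1)
cat /proc/$PID/limits      # effective ulimits of the running k3s server
pidstat -d -p $PID 5 3     # per-process disk I/O, 5-second samples (sysstat package)
iostat -x 5 3              # system-wide device utilization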

@creinig

creinig commented Aug 5, 2019

Regarding CPU & I/O, here's a grafana screenshot of the two servers: https://drive.google.com/open?id=17CJi95dO98z9bW6oMJtiqiym1yEuIQ5i

"beryllium" ist the k3s server host, "protactinium" is the agent with an additional kafka broker installed. The curves are pretty representative for the behavior all day, including shortly after a restart.
There is nothing installed using the k8s API that I know of, aside from the rook (ceph) deployment inside the cluster, which is idle as well.

ulimit of "k3s server":

RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                 unlimited unlimited bytes
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited bytes
LOCKS      max number of file locks held      unlimited unlimited locks
MEMLOCK    max locked-in-memory address space  16777216  16777216 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0
NOFILE     max number of open files             1000000   1000000 files
NPROC      max number of processes            unlimited unlimited processes
RSS        max resident set size              unlimited unlimited bytes
RTPRIO     max real-time priority                     0         0
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals         386264    386264 signals
STACK      max stack size                       8388608 unlimited bytes

@derekrprice

Does this have anything to do with kubernetes/kubernetes#64137? That has to do with orphaned systemd cgroup watches causing kubelet CPU usage to gradually ramp up until a node dies. Several threads attribute this to a bad interaction between certain kernel versions and certain systemd versions. It's particularly exacerbated by cron jobs since they can have such a short lifecycle. Anyhow, if this is the same thing, then I posted a DaemonSet that you can use as a workaround on that ticket until someone gets around to fixing the kernel or systemd or whatever.
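
To check whether a node matches the kernel/systemd combinations discussed in that Kubernetes issue, the versions in play can be read with (a generic check, not from the original comment):

uname -r                         # running kernel version
systemctl --version | head -n1   # systemd version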

@AdamDorwart

AdamDorwart commented Aug 13, 2019

I was also able to replicate this pretty trivially on Ubuntu 18.04. This is a cluster with nothing deployed to it, and yet the k3s server consumed 20-30% CPU. I also evaluated microk8s and standard Kubernetes deployed with kubeadm; both only consumed about 3% CPU at idle with nothing deployed, on the same machine. These results were consistent across attempts and after I applied deployments to the cluster. The load was immediate, right after bringing the cluster up. No logs emitted from the server suggest anything going wrong.

This is a showstopper for me moving forward with k3s, unfortunately.

@runningman84

The latest version 0.8.1 also suffers from this problem. I wonder how someone would run k3s on a low-power edge device when k3s itself consumes so many CPU cycles.

@erikwilson
Contributor

If this is trivial to replicate it would be good to have steps to reproduce.

Ubuntu 18.04 seems to be the common theme, so I tried creating a VM:

Vagrant.configure(2) do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.provider "virtualbox" do |v|
    v.cpus = 1
    v.memory = 2048
  end
end

However, after launching k3s with curl -sfL https://get.k3s.io | sh - and waiting a minute for the cluster to come up, the CPU consumption was ~3% at idle.
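
For anyone repeating this check, one way to sample the idle CPU of the k3s processes inside the VM (a generic top invocation, not part of the original report):

top -b -d 5 -n 3 | grep -E 'k3s|containerd'   # three batched samples, filtered to the k3s processes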

@johnae

johnae commented Dec 24, 2019

I'm seeing this on k3s 1.0.1. I've been using rke before and haven't had this problem on the same host. In my case I'm using NixOS - https://nixos.org.

@drym3r
Author

drym3r commented Dec 25, 2019

I've looked into how load average works, and it does in fact make sense that on a Raspberry Pi 3, which has 4 cores, a load average between 1 and 4 is within capacity.

I do still think that a load average of 2.69 for a master with little usage is too much, but since it's within what the hardware supports, I'll close this issue.

If anyone disagrees, feel free to open another issue or reopen this one.

PS: On the same machine, an agent without containers has a load average of about 0.3, FYI.
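
To make the comparison concrete (a generic sketch, not from the original comment): Linux load average counts runnable (and uninterruptibly sleeping) tasks, so it should be read relative to the core count:

nproc               # number of cores, e.g. 4 on a Raspberry Pi 3
cat /proc/loadavg   # 1/5/15 minute load averages
# A 1-minute load at or below the core count means the run queue is not
# saturated; 2.69 on a 4-core Pi is roughly two thirds of capacity.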

@drym3r drym3r closed this as completed Dec 25, 2019
@helmut72

Same behavior with my tests.

  • Raspberry 3 with Raspbian Light
  • same with Raspi 4 / 4GB
  • Ubuntu Server 18.04 in a VM / Braswell Host
  • Ubuntu Server 18.04 on Braswell Host

k3s always consumes about 20% CPU in top after initialization. That pretty much rules it out for an edge device: 20% for an idle management tool is a joke. A full installation of Apache, PHP, MariaDB, Gitea, Postfix and Dovecot, all in Docker containers, hovers around 3-5% when idling. On a slow Raspberry Pi 3.

Easy to reproduce.

@saksmt

saksmt commented Feb 21, 2020

I have a similar issue here: an idle cluster with a 5-15 load average on the master.
Cluster:

  • master: 4 core, 8Gb ram VDS (kvm)
  • worker: 2 core 4Gb ram VDS (kvm)

Apps: istio + single nginx serving static that has no requests at all.

Versions:

  • k3s v1.17.2+k3s1 (cdab19b)
  • istio:
    • client version: 1.4.5
    • control plane version: 1.4.5
    • data plane version: 1.4.5 (2 proxies)
  • OS: Linux master 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • systemd: 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

No pods are restarting or scaling, so that is out of the picture. I suspect it may be related to health checks, but it seems really strange that 19 pods can produce so much load (especially considering that all health checks should use async I/O and run at most once every 2 seconds).

The strangest thing is that the more I observe (sitting in SSH), the lower the load average becomes... Just after installing the whole cluster the load average was about 4-5; then, after a good sleep (about 7 hours), I wanted to check that all was OK and it was 13-15; after some investigation (collecting logs, inspecting all versions, etc.) it dropped to 5 and is now rising again. I understand neither why nor how this is possible. All this time (regardless of load average), the k3s daemon process jumps from 0-5% to X% (where X is roughly load average * 100) for about a second and then back to 0-5%.

kubectl top node:

NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master       1723m        43%    2275Mi          28%       
scala.node   116m         5%     1317Mi          33%  

kubectl top pod --all-namespaces:

NAMESPACE      NAME                                          CPU(cores)   MEMORY(bytes)   
blog           blog-frontend-85b4b8d4d6-7vdv9                5m           28Mi            
istio-system   grafana-5dfcbf9677-t4wcb                      20m          18Mi            
istio-system   istio-citadel-7d4689c4cf-wzpwc                2m           7Mi             
istio-system   istio-galley-c9647bf5-tsrm2                   152m         19Mi            
istio-system   istio-ingressgateway-67bffcc97f-vbnz9         9m           22Mi            
istio-system   istio-pilot-8897967c5-6x2rr                   18m          14Mi            
istio-system   istio-sidecar-injector-6fdc95467f-sqgsc       127m         7Mi             
istio-system   istio-telemetry-78c75b8b9-nblld               6m           19Mi            
istio-system   istio-tracing-649df9f4bc-5s8xt                30m          270Mi           
istio-system   kiali-867c85b4bd-47896                        2m           8Mi             
istio-system   prometheus-bc68dd6dc-nflvh                    109m         185Mi           
istio-system   secured-prom-f845c694-hzdqb                   0m           2Mi             
istio-system   secured-tracing-5bdb4bf48b-w2kxw              0m           2Mi             
istio-system   ssl-termination-edge-proxy-6bb9bdf867-2xkwd   0m           3Mi             
istio-system   svclb-ssl-termination-edge-proxy-7wzwb        0m           2Mi             
istio-system   svclb-ssl-termination-edge-proxy-qht9t        0m           1Mi             
kube-system    coredns-d798c9dd-cvrw4                        29m          9Mi             
kube-system    local-path-provisioner-58fb86bdfd-n882w       40m          7Mi             
kube-system    metrics-server-6d684c7b5-pn8zd                11m          13Mi 

k3s journalctl logs (last 100 lines):

Feb 21 12:50:45 master k3s[18342]: W0221 12:50:45.755119   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:50:45 master k3s[18342]: W0221 12:50:45.756428   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:50:45 master k3s[18342]: W0221 12:50:45.761659   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:50:45 master k3s[18342]: W0221 12:50:45.802047   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:50:45 master k3s[18342]: W0221 12:50:45.814517   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:51:29 master k3s[18342]: I0221 12:51:29.646590   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.616923   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.620100   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.621750   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.623491   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.623685   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.625416   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.628072   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.629662   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.630752   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:00 master k3s[18342]: W0221 12:52:00.634944   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:52:29 master k3s[18342]: I0221 12:52:29.698895   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.641179   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.643060   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.644241   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.645464   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.645704   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.646113   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.647266   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.647501   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.648394   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:18 master k3s[18342]: W0221 12:53:18.649018   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:53:29 master k3s[18342]: I0221 12:53:29.754930   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.544684   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.552140   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.553658   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.555444   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.557187   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.560310   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.558646   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.559266   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.559288   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:22 master k3s[18342]: W0221 12:54:22.559952   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:54:29 master k3s[18342]: I0221 12:54:29.840474   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:55:29 master k3s[18342]: I0221 12:55:29.892486   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.654650   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.656157   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.656361   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.656446   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.657296   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.657889   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.658169   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.658331   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.659340   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:55:46 master k3s[18342]: W0221 12:55:46.663163   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:56:30 master k3s[18342]: I0221 12:56:30.034176   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:56:40 master k3s[18342]: E0221 12:56:40.061403   18342 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
Feb 21 12:56:45 master k3s[18342]: E0221 12:56:45.226739   18342 upgradeaware.go:357] Error proxying data from client to backend: EOF
Feb 21 12:56:45 master k3s[18342]: E0221 12:56:45.227130   18342 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.630236   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.631870   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.632818   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.633945   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.635160   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.636099   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.637074   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.637854   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.639554   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:08 master k3s[18342]: W0221 12:57:08.640618   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:57:30 master k3s[18342]: I0221 12:57:30.100736   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:58:06 master k3s[18342]: E0221 12:58:06.601367   18342 remote_runtime.go:351] ExecSync dc95ef787409873f3086e12974e80fad43752a4a489fbe9ec83871541978c192 '/usr/local/bin/galley probe --probe-path=/tmp/healthliveness --interval=10s' from runtime service failed: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.663301   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.663276   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.664261   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.664385   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.665411   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.665683   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.666269   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.667465   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.667780   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:28 master k3s[18342]: W0221 12:58:28.673220   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:58:30 master k3s[18342]: I0221 12:58:30.198787   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:59:07 master k3s[18342]: E0221 12:59:07.129099   18342 remote_runtime.go:351] ExecSync dc95ef787409873f3086e12974e80fad43752a4a489fbe9ec83871541978c192 '/usr/local/bin/galley probe --probe-path=/tmp/healthliveness --interval=10s' from runtime service failed: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded
Feb 21 12:59:30 master k3s[18342]: I0221 12:59:30.263218   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.646923   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.646914   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.647972   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.649327   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.651594   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.657478   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.661854   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.663625   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.667435   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 12:59:49 master k3s[18342]: W0221 12:59:49.668398   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:00:30 master k3s[18342]: I0221 13:00:30.355881   18342 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.745478   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-workload-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.747164   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-galley-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.747246   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-performance-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.748267   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-mesh-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.748450   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-istio-service-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.749308   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.749532   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-pilot-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.750209   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-mixer-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.751260   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~secret/default-token-bfcgg and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Feb 21 13:01:17 master k3s[18342]: W0221 13:01:17.752816   18342 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/a87681bf-2765-4d3a-a2b5-61b0664d2cf5/volumes/kubernetes.io~configmap/dashboards-istio-citadel-dashboard and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699

@billimek
Copy link

Do you have many workloads which use exec-based liveness/readiness probes? I came across this issue and wonder if there could be some relationship.

@saksmt
Copy link

saksmt commented Feb 22, 2020

No, there are only HTTP probes.

@saksmt
Copy link

saksmt commented Feb 22, 2020

Although it might be that k3s executes HTTP probes similarly to how the reference implementation executes exec probes (i.e. not via non-blocking IO, but using a thread per probe); that would explain this behavior (though it would make little sense at the implementation level...)

@rcarmo
Copy link

rcarmo commented Feb 16, 2021

Not as far as I can see. It still takes up a lot of CPU, even on low-end Intel VMs.

@jadbaz
Copy link

jadbaz commented Feb 16, 2021

I know this is not the place for this but I'll go ahead and ask anyway
I'm running a single node k3s and I don't expect I'll add additional nodes anytime soon (simple setup, choosing this tech was a mistake, I know)
I'm wondering what I can optimize in my setup
I'm also a k8s newb of sorts
I'm wondering if there is something I can just disable (something that is only needed for multi-node setups)
k3s lists the following (ref):

 --disable (coredns|servicelb|traefik|local-storage|metrics-server)
 --disable-scheduler           
 --disable-cloud-controller    
 --disable-network-policy

@unixfox
Copy link

unixfox commented Feb 16, 2021

I know this is not the place for this but I'll go ahead and ask anyway
I'm running a single node k3s and I don't expect I'll add additional nodes anytime soon (simple setup, choosing this tech was a mistake, I know)
I'm wondering what I can optimize in my setup
I'm also a k8s newb of sorts
I'm wondering if there is something I can just disable (something that is only needed for multi-node setups)
k3s lists the following (ref):

 --disable (coredns|servicelb|traefik|local-storage|metrics-server)
 --disable-scheduler           
 --disable-cloud-controller    
 --disable-network-policy

All of these components are useful even with a single-node setup, which is why they are enabled by default. But here is a small description of these components in case you want to disable some:

  • coredns: resolves DNS names of services and pods from inside the cluster
  • servicelb: exposes services of type LoadBalancer to the "external world"
  • traefik: the bundled ingress controller (a reverse proxy, comparable to NGINX or Apache)
  • local-storage: provides PersistentVolumeClaims backed by local node storage
  • metrics-server: exposes resource metrics for pods and nodes. This one is maybe a good candidate to remove if you don't care about metrics.

As for the scheduler, cloud controller, and network policy: those are key components, and there is no real need to remove them.
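
For reference, a rough sketch of how those flags can be passed, assuming the standard get.k3s.io installer and the optional /etc/rancher/k3s/config.yaml config file; the component selection below is purely illustrative:

# One-off install with components disabled (illustrative selection):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" sh -
# Or the equivalent in /etc/rancher/k3s/config.yaml before (re)starting the service:
# disable:
#   - traefik
#   - metrics-server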

@joulester
Copy link

I'm still seeing this on my VM. Did anyone solve this or find a workaround?

@armandleopold
Copy link

The issue is not resolved.

@boniek83
Copy link

boniek83 commented May 16, 2021

/usr/share/bcc/tools/filetop (from bcc-tools) + iotop show that this is due to etcd being really write intensive. REALLY write intensive: 2-4 MB/s of constant writes when Kubernetes is completely idle (default installation of 3 masters and 2 workers). This is an SSD killer (and once the SLC buffer fills on a cheap QLC/TLC drive, the SSD will grind to a halt), and it is completely unacceptable in a homelab environment. I also wonder how this affects performance if the etcd db is on BTRFS. I guess I will try MySQL next ;)
k3s 1.21
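
For anyone wanting to reproduce this measurement, a quick sketch using the same tools (package names and the bcc path may differ per distro):

# Accumulated I/O per process; -o shows only processes actually doing I/O.
sudo iotop -aoP -d 5 -n 6
# Per-file read/write totals (bcc-tools), refreshing every 5 seconds.
sudo /usr/share/bcc/tools/filetop -C 5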

@brandond
Copy link
Member

brandond commented May 16, 2021

Yes, this is why k3s ships with sqlite by default, and an external DB as the default HA option. etcd is incredibly write-intensive.
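
For completeness, a minimal sketch of pointing k3s at an external datastore instead of embedded etcd; the connection strings and hostnames below are purely illustrative:

# MySQL-compatible backend via the --datastore-endpoint flag:
k3s server --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"
# PostgreSQL works the same way:
# k3s server --datastore-endpoint="postgres://user:pass@db.example.com:5432/k3s"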

@AdamDorwart
Copy link

This issue still seems trivial to reproduce across any hardware. Despite being closed for over 2 years it still has tons of activity and consistent reports. I gave up on k3s long ago because of it.

Suggestions:

  1. Reopen this issue and address it for the health of the project.
  2. Can someone provide a profile report or flamegraph at idle? I suspect the problem will reveal itself quickly with this information.

@darkstar
Copy link

I am facing the same issue. Raspberry Pi 4 with Ubuntu 21.04 (AArch64), k3s v1.20.7+k3s1, single-node installation

CPU usage is constantly around 20-30%, so one full core is permanently used by k3s-server.

I made a flamegraph of k3s-server according to the documentation and have attached it. I have no idea what I'm looking at, so maybe someone else can see where the problem is :-D

Basically I did this:

perf record -F 99 -p $(pidof k3s-server) -g -- sleep 60
perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl out.folded > perf.svg

perf.svg.gz

@brandond
Copy link
Member

20% on a Pi4b is about right, see: resource-profiling

@AdamDorwart
Copy link

AdamDorwart commented May 22, 2021

Here's darkstar's flamegraph for anyone driving by
image

@darkstar From that flamegraph we can see the entire time is spent inside containerd but we don't have symbols to get any introspection on the stack there. We can see about half of the time is spent purely within containerd and a little bit less than the other half ends up in a system call el0_sync from containerd.

To get a fuller picture, make sure you're compiling k3s-server with debug symbols so we can get more detail on the stack traces for both k3s and containerd. I really don't know this project that well, but a quick look shows this build script omitting debug symbols with -w. After you recompile the binary, ensure that you still get similar results w.r.t. CPU load. That new binary's flamegraph will tell us more.

@brandond Those numbers seem unreasonably high. I think I remember plain k8s deployed with kubeadm to a single node taking ~3-5% on a consumer Intel Skylake at 3.1 GHz. There is likely some optimization that can be done here.
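
As an aside, a minimal sketch of what the flag difference looks like for a plain Go build; this is not the official k3s build path, just an illustration of what -w strips:

go build -ldflags "-w -s" -o k3s-stripped .   # -w drops DWARF debug info, -s drops the symbol table
go build -o k3s-debug .                       # default build keeps symbols usable by perf/pprof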

@darkstar
Copy link

20% on a Pi4b is about right, see: resource-profiling

"20% of a core" is different to "25% overall" (which is basically "100% of a core" as the RPi4 has 4 cores), so a factor 5 too high.

@AdamDorwart I'll try and see what I can do about the symbols, thanks for the quick analysis

@darkstar
Copy link

Does anyone have a debug binary ready that they could share? Or a document explaining how to cross-build for aarch64 on x86_64?
I don't really feel like diving into the intricacies of the Go ecosystem just for building a single executable... especially not on a Raspberry Pi with an SD card ;-)

@plu
Copy link

plu commented Sep 10, 2021

Here the problem was connected to k3s and iptables: when dumping iptables -L > /tmp/iptables, the file was 7 MB. I've had this problem only on Debian Buster. When I was still running k3s on Ubuntu (I cannot remember which version), I didn't have this problem. After upgrading Debian from Buster to Bullseye, the problem disappeared entirely.

image
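
A quick way to check whether iptables rule bloat is the issue on your own node (a sketch; the exact counts will obviously differ):

# A healthy single node is usually a few hundred lines; tens of thousands
# points at the duplicate-rule problem referenced below.
sudo iptables-save | wc -l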

@brandond
Copy link
Member

brandond commented Sep 10, 2021

@plu that sounds like #3117 (comment)

@brandond brandond marked this as a duplicate of #3117 Sep 10, 2021
@agorgl
Copy link

agorgl commented Sep 13, 2021

Fresh install on latest Raspbian, jumps between 20-60% CPU usage, did the dump with iptables -L > /tmp/iptables as @plu suggested, and the dump was only 31K

@yuchanns
Copy link

yuchanns commented Nov 21, 2021

Same issue here. On my AWS instance (a t2.micro), k3s-server causes a high load average. It's a fresh install.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          11.53    0.16    3.80    2.23   81.90    0.38

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
loop0             3.25       138.15         0.00         0.00     305637          0          0
loop1             0.03         0.18         0.00         0.00        400          0          0
loop2             0.11         1.05         0.00         0.00       2325          0          0
loop3             0.03         0.49         0.00         0.00       1088          0          0
loop4             1.54        57.79         0.00         0.00     127851          0          0
loop5             0.03         0.17         0.00         0.00        377          0          0
loop6             0.12         1.08         0.00         0.00       2398          0          0
loop7             0.03         0.51         0.00         0.00       1119          0          0
xvda            138.26      3681.40       318.29         0.00    8144350     704156          0

--- update ---
It has been better since deleting the traefik deployment.
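
For anyone following along, a hedged sketch of removing the packaged Traefik; note that k3s may re-apply its bundled manifests on restart, so the --disable flag is the durable option:

# Remove the packaged Traefik HelmChart resource:
kubectl -n kube-system delete helmchart traefik
# Keep it from coming back by (re)starting the server with:
# k3s server --disable traefik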

@dwilliss
Copy link

Same issue here on a 6-node cluster (3 servers, 3 agents).
I just spun up the cluster. I do have Rancher running, but the agent nodes show much lower CPU usage.
Server nodes are showing a load average of 0.7, but the k3s-server process is taking about 30% CPU. Proxmox shows the server VM using 20%.
All nodes run as Proxmox VMs on the latest Ubuntu cloud-init image with 2 cores (Xeon E5-2698 v3) and 6 GB RAM.

@darkstar
Copy link

Same issue here on a 6-node cluster (3 servers, 3 agents). I just spun up the cluster. I do have Rancher running, but the agent nodes show much lower CPU usage. Server nodes are showing a load average of 0.7, but the k3s-server process is taking about 30% CPU. Proxmox shows the server VM using 20%. All nodes run as Proxmox VMs on the latest Ubuntu cloud-init image with 2 cores (Xeon E5-2698 v3) and 6 GB RAM.

Which iptables version are you running? If it's anything below 1.8.6, you can just get rid of it, since those versions have a nasty bug. Simply uninstall the iptables package; k3s brings its own version that works fine.
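
A quick sketch of how to check this (the package command is a Debian/Ubuntu example; check what depends on the iptables package before removing it):

iptables --version                      # shows the version and whether the legacy or nf_tables backend is in use
sudo apt-get remove --purge iptables    # k3s then falls back to the iptables binaries it bundles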

@dwilliss
Copy link

Which iptables version are you running? If it's anything below 1.8.6, you can just get rid of it, since those versions have a nasty bug. Simply uninstall the iptables package; k3s brings its own version that works fine.

Whatever version is in Ubuntu 20.04 (Focal Fossa)

It seems that the problem may actually be etcd. I don't see why it has to write as much as it does. If the cluster is idle, then it should see that there is no change and not try to actually write anything. Or at least keep the writes in a cache that it flushes less frequently. I've read conflicting suggestions for etcd. One says to make sure it's writing to SSDs because spinning rust drives are too slow (again, it shouldn't care how long it takes to write. Ever heard of async IO?). The other advice is that it writes way too much and will kill SSDs.

@zdzichu
Copy link

zdzichu commented Dec 12, 2022

It seems that the problem may actually be etcd.

That's another problem, but an unrelated one. I'm using k3s with external PostgreSQL (no etcd anywhere), and the idle k3s process regularly uses ~30% CPU.

@GaruGaru
Copy link

GaruGaru commented Dec 26, 2022

I'm going to post here even though the issue is closed, given that many other users are experiencing the same problem.

High CPU usage on the master node (50% CPU average across all cores) of a 2-node, non-HA cluster: RPi 4, 8 GB RAM, Ubuntu 20 64-bit.

Profiling
Enable pprof profiling using --enable-pprof on the master.
Gather profile output from the pprof server:
curl --insecure "https://<master>:6443/debug/pprof/profile?seconds=300" > profile.pprof

Results

This may be specific to my workload so it would be nice to gather some profiles from other clusters/workloads to narrow down the issue.

top 3
Showing top 3 nodes out of 509
      flat  flat%   sum%        cum   cum%
    65.98s 24.48% 24.48%     66.51s 24.68%  runtime.cgocall
    49.36s 18.32% 42.80%     49.36s 18.32%  runtime/internal/syscall.Syscall6
     8.92s  3.31% 46.11%      8.92s  3.31%  runtime.futex

The most CPU-busy calls are related to sqlite and some file-read syscalls.

profile svg
pprof002
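
For anyone wanting to dig into such a capture, the profile can be inspected with standard Go tooling (a sketch; the file name matches the curl command above):

go tool pprof -top profile.pprof          # text list of the hottest functions
go tool pprof -http=:8081 profile.pprof   # interactive flamegraph/graph view in a browser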

@chenlcacentury
Copy link

Same issue here; it happened suddenly and persists after a few restarts.


k3s version v1.24.4+k3s1 (c3f830e)
go version go1.18.1

image

@m3talstorm
Copy link

m3talstorm commented Jan 24, 2023

Seeing the same on k3s v1.23.6: 3 masters, 3 workers, all Ubuntu 22.04.

image

@dzegarra
Copy link

dzegarra commented Feb 9, 2023

I also noticed this CPU consumption: between 14% and 50% with almost no traffic.

I switched the apps on a custom-made NAS running on an Odroid from docker-compose to k3s. Plex was able to software-transcode some simple videos before; with k3s, that's not possible. I assume it is because the ~20% of CPU that k3s uses is really needed on this very humble hardware.

image

It was an experiment anyway. I'll have to go back to docker-compose.

@danielsand
Copy link

Some observations, in case they help others:

3x masters, 2x workers
Average load on all 3 masters was around 20-30%, with 43 pods deployed altogether.

Take a closer look at your k3s etcd backend (Prometheus/Grafana will help, or grab the etcd statistics directly).
I had a misbehaving Helm chart running which installed some dependent charts that collided with my own installation.

The etcd backend was constantly stressed because 4 deployments were fighting for control; that put a lot of stress on etcd, which generated the load.

After finding the culprit Helm chart and removing it, the load problem is gone, at least over the last 48+ hours.
Screenshot 2023-03-21 at 23 08 08

cheers 🍻
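
A rough sketch of how one might spot this kind of control-loop churn on an otherwise idle cluster (commands only; resource names will differ per setup):

# A steady stream of fresh events on an "idle" cluster hints at controllers
# fighting over the same objects:
kubectl get events -A --sort-by=.lastTimestamp | tail -n 30
# Charts managed by the k3s helm controller:
kubectl get helmcharts.helm.cattle.io -A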

@Menethoran
Copy link

Has this been resolved? I'm on a fresh install of 23.1.10.3 and am seeing some serious CPU load issues (around 80% on a 12-core machine).

@brandond
Copy link
Member

brandond commented Feb 28, 2024

@Menethoran this is a year old issue; please open a new issue and fill out the issue template. It would also be helpful to make clear if you are talking about 80% of a core, or 80% of all available CPU resources? What process specifically is consuming the CPU time? How many nodes are in your cluster, and what sort of workload are they running?

@k3s-io k3s-io locked and limited conversation to collaborators Feb 28, 2024