Unable to start k3s service in the master node #4187

Closed
i5Js opened this issue Oct 11, 2021 · 24 comments
Comments

i5Js commented Oct 11, 2021

Hi,

After a power outage, I realized my master nodes can't boot up properly. Could you please help me?

Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.292576    3494 server.go:659] external host was not specified, using 192.168.1.242
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.296031    3494 server.go:196] Version: v1.20.4+k3s1
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.399832    3494 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.422451    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.422797    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.438570    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.438781    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.739131    3494 instance.go:289] Using reconciler: lease
Oct 11 15:44:20 localhost k3s[3494]: I1011 15:44:20.047468    3494 rest.go:131] the default service ipfamily for this cluster is: IPv4
Oct 11 15:44:21 localhost k3s[3494]: I1011 15:44:21.093158    3494 trace.go:205] Trace[1098700547]: "List etcd3" key:/secrets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.930) (total time: 1162ms):
Oct 11 15:44:21 localhost k3s[3494]: Trace[1098700547]: [1.162803764s] [1.162803764s] END
Oct 11 15:44:21 localhost k3s[3494]: time="2021-10-11T15:44:21.211867275+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:21 http: TLS handshake error from 192.168.1.247:40382: remote error: tls: bad certificate"
Oct 11 15:44:21 localhost k3s[3494]: time="2021-10-11T15:44:21.291388240+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:21 http: TLS handshake error from 192.168.1.246:32898: remote error: tls: bad certificate"
Oct 11 15:44:21 localhost k3s[3494]: I1011 15:44:21.833274    3494 trace.go:205] Trace[907858497]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.477) (total time: 2355ms):
Oct 11 15:44:21 localhost k3s[3494]: Trace[907858497]: [2.355249555s] [2.355249555s] END
Oct 11 15:44:22 localhost k3s[3494]: I1011 15:44:22.348477    3494 trace.go:205] Trace[1364057824]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.469) (total time: 2878ms):
Oct 11 15:44:22 localhost k3s[3494]: Trace[1364057824]: [2.878577676s] [2.878577676s] END
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.409754393+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.243:57622: remote error: tls: bad certificate"
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.516127365+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.244:57702: remote error: tls: bad certificate"
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.572661771+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.245:53968: remote error: tls: bad certificate"
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.609847    3494 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.751256    3494 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.906221    3494 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.021273    3494 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.069812    3494 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.150211    3494 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.185404    3494 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.256981    3494 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.257151    3494 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: I1011 15:44:24.384797    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:24 localhost k3s[3494]: I1011 15:44:24.384989    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.493498252+02:00" level=info msg="Waiting for API server to become available"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.493904193+02:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.502425570+02:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.580013076+02:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.580396830+02:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.242:6443 -t ${NODE_TOKEN}"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.593023712+02:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.593392725+02:00" level=info msg="Run: k3s kubectl"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594513204+02:00" level=info msg="Module overlay was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594706018+02:00" level=info msg="Module nf_conntrack was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594843259+02:00" level=info msg="Module br_netfilter was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594965447+02:00" level=info msg="Module iptable_nat was already loaded"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.131056520+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:25 http: TLS handshake error from 127.0.0.1:46744: remote error: tls: bad certificate"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.319794741+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:25 http: TLS handshake error from 127.0.0.1:46752: remote error: tls: bad certificate"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.761972793+02:00" level=info msg="certificate CN=noldork3sm2 signed by CN=k3s-server-ca@1611348520: notBefore=2021-01-22 20:48:40 +0000 UTC notAfter=2022-10-11 13:44:25 +0000 UTC"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.875424624+02:00" level=info msg="certificate CN=system:node:noldork3sm2,O=system:nodes signed by CN=k3s-client-ca@1611348520: notBefore=2021-01-22 20:48:40 +0000 UTC notAfter=2022-10-11 13:44:25 +0000 UTC"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.120051459+02:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.122832317+02:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.283677581+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:26 http: TLS handshake error from 192.168.1.247:40390: remote error: tls: bad certificate"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.422107128+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:26 http: TLS handshake error from 192.168.1.246:32906: remote error: tls: bad certificate"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.562304764+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.869943503+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.147306089+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.519192677+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.243:57630: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.612221981+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.244:57710: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.682761192+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.245:53976: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.902814419+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.964965849+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:28 localhost k3s[3494]: time="2021-10-11T15:44:28.087406450+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:28 localhost k3s[3494]: time="2021-10-11T15:44:28.161383137+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:29 localhost k3s[3494]: time="2021-10-11T15:44:29.181541574+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.191166566+02:00" level=info msg="Containerd is now running"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.704789001+02:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.863986948+02:00" level=info msg="Handling backend connection request [noldork3sm2]"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.871063103+02:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/912de41a65c99bc4d50bbb78e6106f3acbf3a70b8dead77b4c4ebc6755b4f9d6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --register-with-taints=k3s-controlplane=true:NoSchedule --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.877528837+02:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm2 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Oct 11 15:44:30 localhost k3s[3494]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Oct 11 15:44:30 localhost k3s[3494]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Oct 11 15:44:30 localhost k3s[3494]: W1011 15:44:30.885477    3494 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 11 15:44:31 localhost systemd[1]: Started Kubernetes systemd probe.
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.110173    3494 server.go:412] Version: v1.20.4+k3s1
Oct 11 15:44:31 localhost systemd[1]: run-rd3dafb4b84874f43849d3a6be96122d2.scope: Succeeded.
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.397245120+02:00" level=info msg="Node CIDR assigned for: noldork3sm2"
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.399130    3494 flannel.go:92] Determining IP address of default interface
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.406546    3494 flannel.go:105] Using interface with name eth0 and address 192.168.1.242
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.439786    3494 kube.go:117] Waiting 10m0s for node controller to sync
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.440206    3494 kube.go:300] Starting kube subnet manager
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.542731591+02:00" level=info msg="labels have already set on node: noldork3sm2"
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.622141811+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:31 http: TLS handshake error from 192.168.1.247:40418: remote error: tls: bad certificate"
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.766657    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.858410    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.899419    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.945105707+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:31 http: TLS handshake error from 192.168.1.246:32934: remote error: tls: bad certificate"
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.969123    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.994156966+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.194272    3494 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Oct 11 15:44:32 localhost k3s[3494]: time="2021-10-11T15:44:32.326172029+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.443535    3494 kube.go:124] Node controller sync successful
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.449585    3494 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Oct 11 15:44:32 localhost k3s[3494]: time="2021-10-11T15:44:32.943749512+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:32 http: TLS handshake error from 192.168.1.244:57738: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.011722    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.020476322+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:33 http: TLS handshake error from 192.168.1.243:57658: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.113077    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.138814897+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:33 http: TLS handshake error from 192.168.1.245:54004: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.165318095+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.316889    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.362522247+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.504235    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.517797286+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:34 localhost k3s[3494]: time="2021-10-11T15:44:34.612139961+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.321816    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.331517    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.383080    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.681164    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: time="2021-10-11T15:44:35.762841849+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.028444908+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:37 http: TLS handshake error from 192.168.1.247:40442: remote error: tls: bad certificate"
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.212074    3494 nvidia.go:81] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.224774    3494 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.226547    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.226918    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.227352    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.227908    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229521    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229719    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229883    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.230040    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.230103    3494 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.231754    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.231935    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.232094    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.232238    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.232299    3494 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.235732    3494 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.238433    3494 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.239216    3494 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.239972    3494 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.240099    3494 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.240173    3494 container_manager_linux.go:328] Creating device plugin manager: true
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.242412    3494 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.242668    3494 kubelet.go:276] Watching apiserver
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.243392    3494 kubelet.go:453] Kubelet client is not nil
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.243663523+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.251012    3494 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3-k3s3, apiVersion: v1alpha2
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.255086    3494 server.go:1177] Started kubelet
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.287874    3494 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.288058    3494 kubelet.go:1292] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.289573    3494 server.go:148] Starting to listen on 0.0.0.0:10250
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.307930    3494 server.go:410] Adding debug handlers to kubelet server.
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.333512    3494 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.375940    3494 volume_manager.go:271] Starting Kubelet Volume Manager
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.401189    3494 desired_state_of_world_populator.go:142] Desired state populator starts to run
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.499390402+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:37 http: TLS handshake error from 192.168.1.246:32958: remote error: tls: bad certificate"
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.547089    3494 kubelet.go:449] kubelet nodes not sync
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.547440    3494 kubelet.go:449] kubelet nodes not sync
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.712551    3494 kubelet_node_status.go:71] Attempting to register node noldork3sm2
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.376591610+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.244:57762: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.466238574+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.521111595+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.243:57682: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.570206    3494 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.2.0/24
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.594525    3494 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.2.0/24
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.616714    3494 cpu_manager.go:193] [cpumanager] starting with none policy
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.616859    3494 cpu_manager.go:194] [cpumanager] reconciling every 10s
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.617089    3494 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.625968    3494 state_mem.go:88] [cpumanager] updated default cpuset: ""
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.626139    3494 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.626312    3494 policy_none.go:43] [cpumanager] none policy: Start
Oct 11 15:44:38 localhost k3s[3494]: W1011 15:44:38.634372    3494 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.639185    3494 plugin_manager.go:114] Starting Kubelet Plugin Manager
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.685001955+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.245:54028: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.862148835+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.903514    3494 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.903914    3494 status_manager.go:158] Starting to sync pod status with apiserver
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.904137    3494 kubelet.go:1829] Starting kubelet main sync loop.
Oct 11 15:44:38 localhost k3s[3494]: E1011 15:44:38.904614    3494 kubelet.go:1853] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
Oct 11 15:44:39 localhost k3s[3494]: time="2021-10-11T15:44:39.028526715+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:39 localhost k3s[3494]: I1011 15:44:39.128733    3494 reconciler.go:157] Reconciler: start to sync state
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.140997    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:39 localhost k3s[3494]: time="2021-10-11T15:44:39.175366145+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.461582    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.526817    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:40 localhost k3s[3494]: time="2021-10-11T15:44:40.823311390+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:41 localhost k3s[3494]: E1011 15:44:41.640768    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:42 localhost k3s[3494]: time="2021-10-11T15:44:42.274294551+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:42 http: TLS handshake error from 192.168.1.247:40466: remote error: tls: bad certificate"
Oct 11 15:44:42 localhost k3s[3494]: time="2021-10-11T15:44:42.440869505+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.514190368+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:43 http: TLS handshake error from 192.168.1.246:32982: remote error: tls: bad certificate"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.844702540+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.905892693+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:43 http: TLS handshake error from 192.168.1.244:57786: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.092561406+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:44 http: TLS handshake error from 192.168.1.243:57706: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.124497041+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.223655170+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:44 http: TLS handshake error from 192.168.1.245:54052: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.446750283+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.574290005+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:45 localhost k3s[3494]: time="2021-10-11T15:44:45.652838945+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 11 15:44:45 localhost k3s[3494]: time="2021-10-11T15:44:45.888170771+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:47 localhost k3s[3494]: E1011 15:44:47.039253    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:47 localhost k3s[3494]: time="2021-10-11T15:44:47.470846416+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:47 http: TLS handshake error from 192.168.1.247:40490: remote error: tls: bad certificate"
Oct 11 15:44:47 localhost k3s[3494]: time="2021-10-11T15:44:47.666725195+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.185482    3494 trace.go:205] Trace[1021595152]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.179) (total time: 10005ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1021595152]: [10.005268937s] [10.005268937s] END
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.192958    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.310307    3494 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"noldork3sm2.16acfdd9ed048b6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"noldork3sm2", UID:"noldork3sm2", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"noldork3sm2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0512dd14f31196f, ext:20715723711, loc:(*time.Location)(0x5917578)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0512dd14f31196f, ext:20715723711, loc:(*time.Location)(0x5917578)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "noldork3sm2.16acfdd9ed048b6f" is forbidden: not yet ready to handle request' (will not retry!)
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.335745    3494 controller.go:187] failed to update lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm2?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.361750    3494 trace.go:205] Trace[1287239117]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.327) (total time: 10034ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1287239117]: [10.034476968s] [10.034476968s] END
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.366844    3494 kubelet_node_status.go:93] Unable to register node "noldork3sm2" with API server: nodes "noldork3sm2" is forbidden: not yet ready to handle request
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.539107    3494 trace.go:205] Trace[1957906643]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm2,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.534) (total time: 10003ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1957906643]: [10.00398902s] [10.00398902s] END
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.573959    3494 trace.go:205] Trace[349769923]: "GuaranteedUpdate etcd3" type:*coordination.Lease (11-Oct-2021 15:44:38.559) (total time: 10014ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[349769923]: [10.014266289s] [10.014266289s] END
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.576147    3494 kubelet_node_status.go:71] Attempting to register node noldork3sm2
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.602808    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:48 localhost k3s[3494]: time="2021-10-11T15:44:48.894876995+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:48 http: TLS handshake error from 192.168.1.246:33006: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.167525108+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.244:57810: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.253824006+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.388213202+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.501691650+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.243:57730: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.627186657+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.245:54076: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.878580737+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.981068208+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:50 localhost k3s[3494]: time="2021-10-11T15:44:50.968247637+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:52 localhost k3s[3494]: E1011 15:44:52.293654    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:52 localhost k3s[3494]: time="2021-10-11T15:44:52.700332454+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:52 http: TLS handshake error from 192.168.1.247:40514: remote error: tls: bad certificate"
Oct 11 15:44:52 localhost k3s[3494]: I1011 15:44:52.776908    3494 trace.go:205] Trace[802387318]: "GuaranteedUpdate etcd3" type:*core.Node (11-Oct-2021 15:44:32.726) (total time: 20050ms):
Oct 11 15:44:52 localhost k3s[3494]: Trace[802387318]: [20.05053322s] [20.05053322s] END
Oct 11 15:44:52 localhost k3s[3494]: I1011 15:44:52.777779    3494 trace.go:205] Trace[1432266599]: "Patch" url:/api/v1/nodes/noldork3sm2/status,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:32.724) (total time: 20052ms):
Oct 11 15:44:52 localhost k3s[3494]: Trace[1432266599]: ---"About to apply patch" 10010ms (15:44:00.762)
Oct 11 15:44:52 localhost k3s[3494]: Trace[1432266599]: [20.05269579s] [20.05269579s] END
Oct 11 15:44:52 localhost k3s[3494]: time="2021-10-11T15:44:52.835532430+02:00" level=fatal msg="flannel exited: failed to acquire lease: nodes \"noldork3sm2\" is forbidden: not yet ready to handle request"
Oct 11 15:44:53 localhost systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 15:44:53 localhost systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 11 15:44:53 localhost systemd[1]: Failed to start Lightweight Kubernetes.

It's a Raspberry Pi cluster running v1.20.4+k3s1, with two master nodes and five workers.

@brandond
Member

Based on all the messages reporting 10+ second datastore operations, it would appear that your node has poor IO latency:

Oct 11 15:44:48 localhost k3s[3494]: Trace[1021595152]: [10.005268937s] [10.005268937s] END

You didn't fill out the Bug Report issue template, so I don't have any information about what your cluster or nodes look like. What else is going on with this node? Do you need to run fsck, wait for a RAID scrub to complete, etc.? Is it possible your disk was damaged during the outage, or is it perhaps just not able to keep up with the load of both starting workload pods and servicing the datastore?

@i5Js
Author

i5Js commented Oct 11, 2021

Hi Brandond

Many thanks for your quick answer and your tip.

Poor performance, interesting. I'll check the SD card; since there was an outage, perhaps it's performing a deep check.

What else should I check?

I didn't fill out the Bug Report since I don't think it's a bug; the cluster had been working fine for more than 200 days.

Best

i5Js

@brandond
Member

SD cards are pretty easy to burn out with the sort of repetitive writes that the Kubernetes datastore does. As they fill up and/or run out of write cycles, performance dips as they try to find fresh cells that can be written to. In general I recommend against using them for anything critical. If this is a Raspberry Pi, I usually recommend picking up a USB SATA or NVMe device to use for storage on the server node.

@i5Js
Author

i5Js commented Oct 11, 2021

Here is the Bug Report:

Environmental Info:
K3s Version:

Masters and Workers: pi@noldork3sM2:~ $ k3s -v
k3s version v1.20.4+k3s1 (838a906a)
go version go1.15.8

Node(s) CPU architecture, OS, and Version:

Linux noldork3sM2 5.4.83-v7+ #1379 SMP Mon Dec 14 13:08:57 GMT 2020 armv7l GNU/Linux
Linux noldork3sN1 5.4.83-v7+ #1379 SMP Mon Dec 14 13:08:57 GMT 2020 armv7l GNU/Linux
Linux noldork3sN3 5.4.83-v7+ #1379 SMP Mon Dec 14 13:08:57 GMT 2020 armv7l GNU/Linux
Linux noldork3sN4 5.4.83-v7+ #1379 SMP Mon Dec 14 13:08:57 GMT 2020 armv7l GNU/Linux
Linux noldork3sN2 5.10.19-rockchip64 #1 SMP PREEMPT Sat Feb 27 22:22:17 CET 2021 aarch64 aarch64 aarch64 GNU/Linux
Linux noldork3sN5 5.10.21-rockchip64 #21.02.3 SMP PREEMPT Mon Mar 8 01:05:08 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

2 Master nodes
5 Workers nodes
Describe the bug:

After a power outage, the master nodes can't start the k3s service; the logs are attached in the first post.
Steps To Reproduce:

  • Installed K3s:
    N/A
    Expected behavior:

K3s service should start
Actual behavior:

The k3s service keeps trying to restart
Additional context / logs:

Backporting

  • Needs backporting to older releases

@i5Js
Author

i5Js commented Oct 11, 2021

Thanks for your quick comment Brandond.

The datastores are mounted via NFS from another server, which uses an NVMe drive:

pi@noldork3sM2:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  2.1G   26G   8% /
devtmpfs        455M     0  455M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   13M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1  253M   47M  206M  19% /boot
tmpfs            98M     0   98M   0% /run/user/1000
pi@noldork3sM2:~ $ mount
/dev/mmcblk0p2 on / type ext4 (rw,noatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=465608k,nr_inodes=116402,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=99776k,mode=700,uid=1000,gid=1000)

@brandond
Member

Can you test your SD card as described here? #2903 (comment)
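For anyone who can't run the linked script, a roughly equivalent quick check can be done with fio directly. This is a sketch, not the exact test from #2903: the file path and sizes below are placeholders, and you should run it on the filesystem the SD card backs.

```shell
#!/bin/sh
# Hypothetical 4k random-write benchmark, similar in spirit to the
# agnostics sdtest.sh numbers above (rand-4k-write IOPS).
# --direct=1 bypasses the page cache so we measure the card, not RAM.
fio --name=rand-4k-write \
    --filename=/home/pi/fio-test.bin \
    --size=64M \
    --rw=randwrite --bs=4k --iodepth=4 \
    --direct=1 --runtime=30 --time_based \
    --group_reporting

# Clean up the test file afterwards
rm -f /home/pi/fio-test.bin
```

As a rough yardstick, the sdtest.sh targets quoted later in this thread are 10000 KB/s sequential write, 500 random-write IOPS, and 1500 random-read IOPS.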

@brandond
Member

brandond commented Oct 11, 2021

I'll also note that I've not had great results with datastores hosted on NFS. If you're going to host the datastore on another server, you might get better results from running mysql or postgres on that node and pointing at it via --datastore-endpoint, as opposed to just keeping the sqlite or etcd data on an NFS volume.
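For reference, the external-datastore setup looks roughly like this in the k3s config file. The hostname, credentials, and database name below are placeholders for illustration, not values from this cluster:

```yaml
# /etc/rancher/k3s/config.yaml on each server node (a sketch)
# MySQL endpoint format is mysql://user:pass@tcp(host:port)/dbname
datastore-endpoint: "mysql://k3s:changeme@tcp(nfs-server-host:3306)/k3s"
```

The same value can be passed on the command line as `--datastore-endpoint` or via the `K3S_DATASTORE_ENDPOINT` environment variable.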

@i5Js
Author

i5Js commented Oct 11, 2021

I'm unable to install fio :(

pi@noldork3sM2:~ $ sudo apt --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following package was automatically installed and is no longer required:
  rpi-eeprom-images
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
  adwaita-icon-theme at-spi2-core dbus-user-session dconf-gsettings-backend dconf-service fio fontconfig fontconfig-config fonts-dejavu-core glib-networking glib-networking-common glib-networking-services gsettings-desktop-schemas
  gtk-update-icon-cache hicolor-icon-theme ibverbs-providers libaio1 libatk-bridge2.0-0 libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libboost-atomic1.67.0 libboost-iostreams1.67.0 libboost-regex1.67.0 libboost-system1.67.0
  libboost-thread1.67.0 libcairo-gobject2 libcairo2 libcolord2 libcroco3 libcups2 libdatrie1 libdconf1 libepoxy0 libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0
  libgraphite2-3 libgtk-3-0 libgtk-3-bin libgtk-3-common libgtksourceview-3.0-1 libgtksourceview-3.0-common libharfbuzz0b libibverbs1 libjbig0 libjson-glib-1.0-0 libjson-glib-1.0-common liblcms2-2 libnspr4 libnss3 libnuma1 libpango-1.0-0
  libpangocairo-1.0-0 libpangoft2-1.0-0 libpixman-1-0 libproxy1v5 librados2 librbd1 librdmacm1 librest-0.7-0 librsvg2-2 librsvg2-common libsoup-gnome2.4-1 libsoup2.4-1 libthai-data libthai0 libtiff5 libwayland-client0 libwayland-cursor0
  libwayland-egl1 libwebp6 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxkbcommon0 libxrandr2 libxrender1 libxtst6 mousepad x11-common
Suggested packages:
  gnuplot gfio python-scipy colord cups-common gvfs liblcms2-utils librsvg2-bin
The following NEW packages will be installed:
  adwaita-icon-theme at-spi2-core dbus-user-session dconf-gsettings-backend dconf-service fio fontconfig fontconfig-config fonts-dejavu-core glib-networking glib-networking-common glib-networking-services gsettings-desktop-schemas
  gtk-update-icon-cache hicolor-icon-theme ibverbs-providers libaio1 libatk-bridge2.0-0 libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libboost-atomic1.67.0 libboost-iostreams1.67.0 libboost-regex1.67.0 libboost-system1.67.0
  libboost-thread1.67.0 libcairo-gobject2 libcairo2 libcolord2 libcroco3 libcups2 libdatrie1 libdconf1 libepoxy0 libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0
  libgraphite2-3 libgtk-3-0 libgtk-3-bin libgtk-3-common libgtksourceview-3.0-1 libgtksourceview-3.0-common libharfbuzz0b libibverbs1 libjbig0 libjson-glib-1.0-0 libjson-glib-1.0-common liblcms2-2 libnspr4 libnss3 libnuma1 libpango-1.0-0
  libpangocairo-1.0-0 libpangoft2-1.0-0 libpixman-1-0 libproxy1v5 librados2 librbd1 librdmacm1 librest-0.7-0 librsvg2-2 librsvg2-common libsoup-gnome2.4-1 libsoup2.4-1 libthai-data libthai0 libtiff5 libwayland-client0 libwayland-cursor0
  libwayland-egl1 libwebp6 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxkbcommon0 libxrandr2 libxrender1 libxtst6 mousepad x11-common
0 upgraded, 92 newly installed, 0 to remove and 85 not upgraded.
1 not fully installed or removed.
Need to get 228 kB/45.4 MB of archives.
After this operation, 140 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Err:1 http://raspbian.raspberrypi.org/raspbian buster/main armhf libwebp6 armhf 0.6.1-2
  404  Not Found [IP: 93.93.128.193 80]
E: Failed to fetch http://raspbian.raspberrypi.org/raspbian/pool/main/libw/libwebp/libwebp6_0.6.1-2_armhf.deb  404  Not Found [IP: 93.93.128.193 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

The MySQL server runs on the NFS server too.

@brandond
Member

Yes, but the difference is that when you use MySQL on that server, the disk is local to it, as opposed to running a database's files on top of NFS.

@i5Js
Author

i5Js commented Oct 12, 2021

I'm sorry, but I don't understand your last comment.

Anyway, I tried a different test yesterday on one of the masters: after uninstalling the current version and installing the latest one, the service started successfully, although there were a lot of certificate issues with the worker nodes, so I guess it might not be a database issue.

@i5Js
Author

i5Js commented Oct 12, 2021

I've tested the card... it seems it's time to buy a new one or reinstall it:

Run 1


prepare-file;0;0;4397;8
seq-write;0;0;4709;9
rand-4k-write;0;0;2048;512
rand-4k-read;6656;1664;0;0
Sequential write speed 4709 KB/sec (target 10000) - FAIL
Note that sequential write speed declines over time as a card is used - your card may require reformatting
Random write speed 512 IOPS (target 500) - PASS
Random read speed 1664 IOPS (target 1500) - PASS
Run 2
prepare-file;0;0;5168;10
seq-write;0;0;3813;7
rand-4k-write;0;0;2189;547
rand-4k-read;6644;1661;0;0
Sequential write speed 3813 KB/sec (target 10000) - FAIL
Note that sequential write speed declines over time as a card is used - your card may require reformatting
Random write speed 547 IOPS (target 500) - PASS
Random read speed 1661 IOPS (target 1500) - PASS
Run 3
prepare-file;0;0;8642;16
seq-write;0;0;5367;10
rand-4k-write;0;0;1126;281
rand-4k-read;6648;1662;0;0
Sequential write speed 5367 KB/sec (target 10000) - FAIL
Note that sequential write speed declines over time as a card is used - your card may require reformatting
Random write speed 281 IOPS (target 500) - FAIL
Random read speed 1662 IOPS (target 1500) - PASS

I'll try in the other master.

@i5Js
Author

i5Js commented Oct 12, 2021

The other master seems better, but...

Run 1
prepare-file;0;0;9988;19
seq-write;0;0;10582;20
rand-4k-write;0;0;793;198
rand-4k-read;8378;2094;0;0
Sequential write speed 10582 KB/sec (target 10000) - PASS
Random write speed 198 IOPS (target 500) - FAIL
Random read speed 2094 IOPS (target 1500) - PASS
Run 2
prepare-file;0;0;10926;21
seq-write;0;0;10848;21
rand-4k-write;0;0;1364;341
rand-4k-read;8515;2128;0;0
Sequential write speed 10848 KB/sec (target 10000) - PASS
Random write speed 341 IOPS (target 500) - FAIL
Random read speed 2128 IOPS (target 1500) - PASS
Run 3
prepare-file;0;0;13333;26
seq-write;0;0;13733;26
rand-4k-write;0;0;2297;574
rand-4k-read;6415;1603;0;0
Sequential write speed 13733 KB/sec (target 10000) - PASS
Random write speed 574 IOPS (target 500) - PASS
Random read speed 1603 IOPS (target 1500) - PASS

@i5Js
Author

i5Js commented Oct 12, 2021

I've performed an fsck on one of the masters; after that:

root@noldork3sM1:/home/pi# . /usr/share/agnostics/sdtest.sh
Run 1
prepare-file;0;0;18068;35
seq-write;0;0;19297;37
rand-4k-write;0;0;3014;753
rand-4k-read;8548;2137;0;0
Sequential write speed 19297 KB/sec (target 10000) - PASS
Random write speed 753 IOPS (target 500) - PASS
Random read speed 2137 IOPS (target 1500) - PASS

I'm going to try to install it again.

@i5Js
Author

i5Js commented Oct 12, 2021

Same:

Oct 12 09:49:46 localhost systemd[1]: Reloading.
Oct 12 09:49:47 localhost systemd[1]: [email protected]: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 12 09:49:47 localhost systemd[1]: [email protected]: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 12 09:49:48 localhost systemd[1]: Reloading.
Oct 12 09:49:51 localhost systemd[1]: Reached target Network is Online.
Oct 12 09:49:51 localhost systemd[1]: Starting Lightweight Kubernetes...
**Oct 12 09:49:51 localhost sh[626]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Oct 12 09:49:51 localhost sh[626]: /bin/sh: 1: /usr/bin/systemctl: not found**
Oct 12 09:49:51 localhost kernel: [  259.693550] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 12 09:49:51 localhost kernel: [  259.700390] Bridge firewalling registered
Oct 12 09:49:51 localhost k3s[632]: time="2021-10-12T09:49:51+02:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Oct 12 09:49:51 localhost k3s[632]: time="2021-10-12T09:49:51+02:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/912de41a65c99bc4d50bbb78e6106f3acbf3a70b8dead77b4c4ebc6755b4f9d6"
Oct 12 09:50:19 localhost k3s[632]: time="2021-10-12T09:50:19.894753084+02:00" level=info msg="Starting k3s v1.20.4+k3s1 (838a906a)"
Oct 12 09:50:19 localhost k3s[632]: time="2021-10-12T09:50:19.911064502+02:00" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Oct 12 09:50:19 localhost k3s[632]: time="2021-10-12T09:50:19.912758854+02:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Oct 12 09:50:19 localhost k3s[632]: time="2021-10-12T09:50:19.922263700+02:00" level=info msg="Database tables and indexes are up to date"
Oct 12 09:50:19 localhost k3s[632]: time="2021-10-12T09:50:19.936193100+02:00" level=info msg="Kine listening on unix://kine.sock"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.104682665+02:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.138183530+02:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.167824903+02:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.197357057+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.225401373+02:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.253142194+02:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.300342920+02:00" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.330760140+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.366096295+02:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.396188457+02:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1633964991: notBefore=2021-10-11 15:09:51 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.424065843+02:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1633964991: notBefore=2021-10-11 15:09:51 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.454707755+02:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1633964991: notBefore=2021-10-11 15:09:51 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.466295497+02:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Oct 12 09:50:20 localhost k3s[632]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Oct 12 09:50:20 localhost k3s[632]: I1012 09:50:20.480379     632 server.go:659] external host was not specified, using 192.168.1.241
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.482202530+02:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:20 +0000 UTC"
Oct 12 09:50:20 localhost k3s[632]: I1012 09:50:20.486801     632 server.go:196] Version: v1.20.4+k3s1
Oct 12 09:50:20 localhost k3s[632]: time="2021-10-12T09:50:20.488811285+02:00" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.241:192.168.1.241 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-noldork3s.noldor.local:noldork3s.noldor.local listener.cattle.io/fingerprint:SHA1=0850F67A017FA6B3F0FD11DA177AD0AF93E9A823]"
Oct 12 09:50:22 localhost k3s[632]: time="2021-10-12T09:50:22.624184980+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:22 http: TLS handshake error from 192.168.1.247:33116: remote error: tls: bad certificate"
Oct 12 09:50:22 localhost k3s[632]: time="2021-10-12T09:50:22.853233069+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:22 http: TLS handshake error from 192.168.1.244:47384: remote error: tls: bad certificate"
Oct 12 09:50:23 localhost k3s[632]: time="2021-10-12T09:50:23.300006860+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:23 http: TLS handshake error from 192.168.1.246:46322: remote error: tls: bad certificate"
Oct 12 09:50:23 localhost k3s[632]: time="2021-10-12T09:50:23.756582904+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:23 http: TLS handshake error from 192.168.1.245:59176: remote error: tls: bad certificate"
Oct 12 09:50:27 localhost k3s[632]: time="2021-10-12T09:50:27.688698322+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:27 http: TLS handshake error from 192.168.1.247:33124: remote error: tls: bad certificate"
Oct 12 09:50:27 localhost k3s[632]: time="2021-10-12T09:50:27.920123264+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:27 http: TLS handshake error from 192.168.1.244:47392: remote error: tls: bad certificate"
Oct 12 09:50:28 localhost k3s[632]: time="2021-10-12T09:50:28.409624809+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:28 http: TLS handshake error from 192.168.1.246:46330: remote error: tls: bad certificate"
Oct 12 09:50:28 localhost k3s[632]: time="2021-10-12T09:50:28.859764874+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:28 http: TLS handshake error from 192.168.1.245:59184: remote error: tls: bad certificate"
Oct 12 09:50:32 localhost k3s[632]: time="2021-10-12T09:50:32.751132608+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:32 http: TLS handshake error from 192.168.1.247:33132: remote error: tls: bad certificate"
Oct 12 09:50:32 localhost k3s[632]: time="2021-10-12T09:50:32.989448595+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:32 http: TLS handshake error from 192.168.1.244:47400: remote error: tls: bad certificate"
Oct 12 09:50:33 localhost k3s[632]: time="2021-10-12T09:50:33.517196377+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:33 http: TLS handshake error from 192.168.1.246:46338: remote error: tls: bad certificate"
Oct 12 09:50:33 localhost k3s[632]: time="2021-10-12T09:50:33.963143046+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:33 http: TLS handshake error from 192.168.1.245:59192: remote error: tls: bad certificate"
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.383184     632 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.399140     632 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.399399     632 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.414607     632 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.415528     632 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.702777     632 instance.go:289] Using reconciler: lease
Oct 12 09:50:34 localhost k3s[632]: I1012 09:50:34.964442     632 rest.go:131] the default service ipfamily for this cluster is: IPv4
Oct 12 09:50:35 localhost k3s[632]: I1012 09:50:35.684393     632 trace.go:205] Trace[106536233]: "List etcd3" key:/secrets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (12-Oct-2021 09:50:34.839) (total time: 844ms):
Oct 12 09:50:35 localhost k3s[632]: Trace[106536233]: [844.915211ms] [844.915211ms] END
Oct 12 09:50:36 localhost k3s[632]: I1012 09:50:36.584999     632 trace.go:205] Trace[1259061099]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (12-Oct-2021 09:50:34.495) (total time: 2089ms):
Oct 12 09:50:36 localhost k3s[632]: Trace[1259061099]: [2.089409768s] [2.089409768s] END
Oct 12 09:50:36 localhost k3s[632]: I1012 09:50:36.704699     632 trace.go:205] Trace[770724692]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (12-Oct-2021 09:50:34.500) (total time: 2204ms):
Oct 12 09:50:36 localhost k3s[632]: Trace[770724692]: [2.204141305s] [2.204141305s] END
Oct 12 09:50:37 localhost k3s[632]: time="2021-10-12T09:50:37.819151606+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:37 http: TLS handshake error from 192.168.1.247:33140: remote error: tls: bad certificate"
Oct 12 09:50:37 localhost k3s[632]: I1012 09:50:37.911804     632 trace.go:205] Trace[343922089]: "List etcd3" key:/mutatingwebhookconfigurations,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (12-Oct-2021 09:50:36.889) (total time: 1021ms):
Oct 12 09:50:37 localhost k3s[632]: Trace[343922089]: [1.021653329s] [1.021653329s] END
Oct 12 09:50:38 localhost k3s[632]: time="2021-10-12T09:50:38.066173240+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:38 http: TLS handshake error from 192.168.1.244:47408: remote error: tls: bad certificate"
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.299434     632 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.435697     632 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.583717     632 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: time="2021-10-12T09:50:38.628389375+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:38 http: TLS handshake error from 192.168.1.246:46346: remote error: tls: bad certificate"
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.702437     632 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.751287     632 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.880564     632 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:38 localhost k3s[632]: W1012 09:50:38.950443     632 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.070113306+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:39 http: TLS handshake error from 192.168.1.245:59200: remote error: tls: bad certificate"
Oct 12 09:50:39 localhost k3s[632]: W1012 09:50:39.076542     632 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
Oct 12 09:50:39 localhost k3s[632]: W1012 09:50:39.076704     632 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
Oct 12 09:50:39 localhost k3s[632]: I1012 09:50:39.192216     632 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 12 09:50:39 localhost k3s[632]: I1012 09:50:39.193557     632 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.302624417+02:00" level=info msg="Waiting for API server to become available"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.304924966+02:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.311278587+02:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.387648492+02:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.389777164+02:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.241:6443 -t ${NODE_TOKEN}"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.403156705+02:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.403437698+02:00" level=info msg="Run: k3s kubectl"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.406444350+02:00" level=info msg="Module overlay was already loaded"
Oct 12 09:50:39 localhost k3s[632]: time="2021-10-12T09:50:39.491931920+02:00" level=info msg="Module br_netfilter was already loaded"
Oct 12 09:50:40 localhost k3s[632]: time="2021-10-12T09:50:40.298395404+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:40 http: TLS handshake error from 127.0.0.1:42416: remote error: tls: bad certificate"
Oct 12 09:50:40 localhost k3s[632]: time="2021-10-12T09:50:40.490691659+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:40 http: TLS handshake error from 127.0.0.1:42422: remote error: tls: bad certificate"
Oct 12 09:50:40 localhost k3s[632]: time="2021-10-12T09:50:40.929104981+02:00" level=info msg="certificate CN=noldork3sm1 signed by CN=k3s-server-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:40 +0000 UTC"
Oct 12 09:50:41 localhost k3s[632]: time="2021-10-12T09:50:41.042735790+02:00" level=info msg="certificate CN=system:node:noldork3sm1,O=system:nodes signed by CN=k3s-client-ca@1633964990: notBefore=2021-10-11 15:09:50 +0000 UTC notAfter=2022-10-12 07:50:41 +0000 UTC"
Oct 12 09:50:41 localhost k3s[632]: time="2021-10-12T09:50:41.513446714+02:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 12 09:50:41 localhost k3s[632]: time="2021-10-12T09:50:41.518725423+02:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 12 09:50:42 localhost k3s[632]: time="2021-10-12T09:50:42.523940708+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
Oct 12 09:50:42 localhost k3s[632]: time="2021-10-12T09:50:42.887651530+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:42 http: TLS handshake error from 192.168.1.247:33148: remote error: tls: bad certificate"
Oct 12 09:50:43 localhost k3s[632]: time="2021-10-12T09:50:43.143895534+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:43 http: TLS handshake error from 192.168.1.244:47416: remote error: tls: bad certificate"
Oct 12 09:50:43 localhost k3s[632]: time="2021-10-12T09:50:43.527023395+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
Oct 12 09:50:43 localhost k3s[632]: time="2021-10-12T09:50:43.738975714+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:43 http: TLS handshake error from 192.168.1.246:46354: remote error: tls: bad certificate"
Oct 12 09:50:44 localhost k3s[632]: time="2021-10-12T09:50:44.205616079+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:44 http: TLS handshake error from 192.168.1.245:59208: remote error: tls: bad certificate"
Oct 12 09:50:44 localhost k3s[632]: time="2021-10-12T09:50:44.529604756+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
Oct 12 09:50:45 localhost k3s[632]: time="2021-10-12T09:50:45.543501543+02:00" level=info msg="Containerd is now running"
Oct 12 09:50:46 localhost k3s[632]: time="2021-10-12T09:50:46.249046639+02:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 12 09:50:46 localhost k3s[632]: time="2021-10-12T09:50:46.408937228+02:00" level=info msg="Handling backend connection request [noldork3sm1]"
Oct 12 09:50:46 localhost k3s[632]: time="2021-10-12T09:50:46.415786627+02:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/912de41a65c99bc4d50bbb78e6106f3acbf3a70b8dead77b4c4ebc6755b4f9d6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --register-with-taints=k3s-controlplane=true:NoSchedule --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 12 09:50:46 localhost k3s[632]: time="2021-10-12T09:50:46.422345502+02:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm1 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Oct 12 09:50:46 localhost k3s[632]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Oct 12 09:50:46 localhost k3s[632]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Oct 12 09:50:46 localhost k3s[632]: W1012 09:50:46.425250     632 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 12 09:50:46 localhost kernel: [  315.069768] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Oct 12 09:50:46 localhost kernel: [  315.069861] IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Oct 12 09:50:46 localhost kernel: [  315.070430] IPVS: ipvs loaded.
Oct 12 09:50:46 localhost systemd[1]: Started Kubernetes systemd probe.
Oct 12 09:50:46 localhost k3s[632]: I1012 09:50:46.619545     632 server.go:412] Version: v1.20.4+k3s1
Oct 12 09:50:46 localhost systemd[1]: run-r4b9e86f0973b4d05acbe5ed7a8b25287.scope: Succeeded.
Oct 12 09:50:46 localhost kernel: [  315.164783] IPVS: [rr] scheduler registered.
Oct 12 09:50:46 localhost kernel: [  315.254221] IPVS: [wrr] scheduler registered.
Oct 12 09:50:46 localhost kernel: [  315.351629] IPVS: [sh] scheduler registered.
Oct 12 09:50:46 localhost k3s[632]: time="2021-10-12T09:50:46.935087950+02:00" level=info msg="Node CIDR assigned for: noldork3sm1"
Oct 12 09:50:46 localhost k3s[632]: I1012 09:50:46.938926     632 flannel.go:92] Determining IP address of default interface
Oct 12 09:50:46 localhost k3s[632]: I1012 09:50:46.944379     632 flannel.go:105] Using interface with name eth0 and address 192.168.1.241
Oct 12 09:50:46 localhost k3s[632]: I1012 09:50:46.979437     632 kube.go:117] Waiting 10m0s for node controller to sync
Oct 12 09:50:46 localhost k3s[632]: I1012 09:50:46.980848     632 kube.go:300] Starting kube subnet manager
Oct 12 09:50:47 localhost k3s[632]: E1012 09:50:47.510692     632 node.go:161] Failed to retrieve node info: nodes "noldork3sm1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 12 09:50:47 localhost k3s[632]: I1012 09:50:47.669551     632 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Oct 12 09:50:47 localhost k3s[632]: I1012 09:50:47.988355     632 kube.go:124] Node controller sync successful
Oct 12 09:50:47 localhost k3s[632]: I1012 09:50:47.988913     632 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Oct 12 09:50:48 localhost k3s[632]: time="2021-10-12T09:50:48.004831823+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:48 http: TLS handshake error from 192.168.1.247:33160: remote error: tls: bad certificate"
Oct 12 09:50:48 localhost systemd-udevd[666]: Using default interface naming scheme 'v240'.
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: waiting for carrier
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: carrier acquired
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: IAID 8b:13:ff:a9
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: adding address fe80::fe72:b4fc:1964:9e7b
Oct 12 09:50:48 localhost avahi-daemon[272]: Joining mDNS multicast group on interface flannel.1.IPv6 with address fe80::fe72:b4fc:1964:9e7b.
Oct 12 09:50:48 localhost avahi-daemon[272]: New relevant interface flannel.1.IPv6 for mDNS.
Oct 12 09:50:48 localhost avahi-daemon[272]: Registering new address record for fe80::fe72:b4fc:1964:9e7b on flannel.1.*.
Oct 12 09:50:48 localhost k3s[632]: time="2021-10-12T09:50:48.266244921+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:48 http: TLS handshake error from 192.168.1.244:47428: remote error: tls: bad certificate"
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: soliciting a DHCP lease
Oct 12 09:50:48 localhost k3s[632]: E1012 09:50:48.591356     632 node.go:161] Failed to retrieve node info: nodes "noldork3sm1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 12 09:50:48 localhost k3s[632]: time="2021-10-12T09:50:48.920066076+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:48 http: TLS handshake error from 192.168.1.246:46366: remote error: tls: bad certificate"
Oct 12 09:50:48 localhost dhcpcd[375]: flannel.1: soliciting an IPv6 router
Oct 12 09:50:49 localhost k3s[632]: time="2021-10-12T09:50:49.399846740+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:49 http: TLS handshake error from 192.168.1.245:59220: remote error: tls: bad certificate"
Oct 12 09:50:49 localhost k3s[632]: time="2021-10-12T09:50:49.425302637+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 12 09:50:50 localhost k3s[632]: E1012 09:50:50.804456     632 node.go:161] Failed to retrieve node info: nodes "noldork3sm1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 12 09:50:51 localhost k3s[632]: time="2021-10-12T09:50:51.307140110+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.687038     632 nvidia.go:81] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.697238     632 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.700087     632 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.700913     632 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.702676     632 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.704035     632 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.707642     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.707960     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.708126     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.708288     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: E1012 09:50:52.708359     632 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.711001     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.711228     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.711468     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: W1012 09:50:52.711639     632 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 12 09:50:52 localhost k3s[632]: E1012 09:50:52.712292     632 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.719386     632 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.724010     632 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.724187     632 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.725018     632 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.725679     632 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.725761     632 container_manager_linux.go:328] Creating device plugin manager: true
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.731082     632 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.731281     632 kubelet.go:276] Watching apiserver
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.735622     632 kubelet.go:453] Kubelet client is not nil
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.754710     632 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3-k3s3, apiVersion: v1alpha2
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.781972     632 server.go:1177] Started kubelet
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.796157     632 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.819044     632 server.go:148] Starting to listen on 0.0.0.0:10250
Oct 12 09:50:52 localhost k3s[632]: E1012 09:50:52.841411     632 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Oct 12 09:50:52 localhost k3s[632]: E1012 09:50:52.843318     632 kubelet.go:1292] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.844717     632 volume_manager.go:271] Starting Kubelet Volume Manager
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.856131     632 server.go:410] Adding debug handlers to kubelet server.
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.844833     632 desired_state_of_world_populator.go:142] Desired state populator starts to run
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.961717     632 kubelet.go:449] kubelet nodes not sync
Oct 12 09:50:52 localhost k3s[632]: I1012 09:50:52.962071     632 kubelet.go:449] kubelet nodes not sync
Oct 12 09:50:53 localhost k3s[632]: I1012 09:50:53.072986     632 kubelet_node_status.go:71] Attempting to register node noldork3sm1
Oct 12 09:50:53 localhost k3s[632]: time="2021-10-12T09:50:53.200509035+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:53 http: TLS handshake error from 192.168.1.247:33168: remote error: tls: bad certificate"
Oct 12 09:50:53 localhost dhcpcd[375]: flannel.1: probing for an IPv4LL address
Oct 12 09:50:53 localhost k3s[632]: time="2021-10-12T09:50:53.481619576+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:53 http: TLS handshake error from 192.168.1.244:47436: remote error: tls: bad certificate"
Oct 12 09:50:53 localhost k3s[632]: I1012 09:50:53.963432     632 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.0.0/24
Oct 12 09:50:53 localhost k3s[632]: I1012 09:50:53.967871     632 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
Oct 12 09:50:54 localhost k3s[632]: time="2021-10-12T09:50:54.092599004+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:54 http: TLS handshake error from 192.168.1.246:46374: remote error: tls: bad certificate"
Oct 12 09:50:54 localhost k3s[632]: I1012 09:50:54.387068     632 cpu_manager.go:193] [cpumanager] starting with none policy
Oct 12 09:50:54 localhost k3s[632]: I1012 09:50:54.387342     632 cpu_manager.go:194] [cpumanager] reconciling every 10s
Oct 12 09:50:54 localhost k3s[632]: I1012 09:50:54.387562     632 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 12 09:50:54 localhost k3s[632]: I1012 09:50:54.416756     632 policy_none.go:43] [cpumanager] none policy: Start
Oct 12 09:50:54 localhost kernel: [  323.002497] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
Oct 12 09:50:54 localhost k3s[632]: time="2021-10-12T09:50:54.554026244+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:54 http: TLS handshake error from 192.168.1.245:59228: remote error: tls: bad certificate"
Oct 12 09:50:54 localhost k3s[632]: W1012 09:50:54.570526     632 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Oct 12 09:50:54 localhost k3s[632]: I1012 09:50:54.578590     632 plugin_manager.go:114] Starting Kubelet Plugin Manager
Oct 12 09:50:55 localhost k3s[632]: I1012 09:50:55.140494     632 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Oct 12 09:50:55 localhost k3s[632]: I1012 09:50:55.142136     632 status_manager.go:158] Starting to sync pod status with apiserver
Oct 12 09:50:55 localhost k3s[632]: I1012 09:50:55.142386     632 kubelet.go:1829] Starting kubelet main sync loop.
Oct 12 09:50:55 localhost k3s[632]: E1012 09:50:55.142771     632 kubelet.go:1853] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
Oct 12 09:50:55 localhost k3s[632]: I1012 09:50:55.303044     632 reconciler.go:157] Reconciler: start to sync state
Oct 12 09:50:55 localhost k3s[632]: E1012 09:50:55.387335     632 node.go:161] Failed to retrieve node info: nodes "noldork3sm1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 12 09:50:56 localhost k3s[632]: time="2021-10-12T09:50:56.366595825+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 12 09:50:58 localhost k3s[632]: time="2021-10-12T09:50:58.457881986+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:58 http: TLS handshake error from 192.168.1.247:33176: remote error: tls: bad certificate"
Oct 12 09:50:58 localhost k3s[632]: time="2021-10-12T09:50:58.685700631+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:58 http: TLS handshake error from 192.168.1.244:47444: remote error: tls: bad certificate"
Oct 12 09:50:59 localhost dhcpcd[375]: flannel.1: using IPv4LL address 169.254.63.245
Oct 12 09:50:59 localhost dhcpcd[375]: flannel.1: adding route to 169.254.0.0/16
Oct 12 09:50:59 localhost avahi-daemon[272]: Joining mDNS multicast group on interface flannel.1.IPv4 with address 169.254.63.245.
Oct 12 09:50:59 localhost avahi-daemon[272]: New relevant interface flannel.1.IPv4 for mDNS.
Oct 12 09:50:59 localhost avahi-daemon[272]: Registering new address record for 169.254.63.245 on flannel.1.IPv4.
Oct 12 09:50:59 localhost k3s[632]: time="2021-10-12T09:50:59.232570464+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:59 http: TLS handshake error from 192.168.1.246:46382: remote error: tls: bad certificate"
Oct 12 09:50:59 localhost k3s[632]: time="2021-10-12T09:50:59.731797313+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:50:59 http: TLS handshake error from 192.168.1.245:59236: remote error: tls: bad certificate"
Oct 12 09:51:00 localhost k3s[632]: time="2021-10-12T09:51:00.449244453+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 12 09:51:01 localhost k3s[632]: time="2021-10-12T09:51:01.431261772+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 12 09:51:01 localhost dhcpcd[375]: flannel.1: no IPv6 Routers available
Oct 12 09:51:03 localhost k3s[632]: time="2021-10-12T09:51:03.529575954+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:51:03 http: TLS handshake error from 192.168.1.247:33184: remote error: tls: bad certificate"
Oct 12 09:51:03 localhost k3s[632]: I1012 09:51:03.648621     632 trace.go:205] Trace[291309165]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (12-Oct-2021 09:50:53.625) (total time: 10022ms):
Oct 12 09:51:03 localhost k3s[632]: Trace[291309165]: [10.022735649s] [10.022735649s] END
Oct 12 09:51:03 localhost k3s[632]: I1012 09:51:03.702825     632 trace.go:205] Trace[362198050]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (12-Oct-2021 09:50:53.690) (total time: 10011ms):
Oct 12 09:51:03 localhost k3s[632]: Trace[362198050]: [10.011574264s] [10.011574264s] END
Oct 12 09:51:03 localhost k3s[632]: E1012 09:51:03.732215     632 controller.go:187] failed to update lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm1?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 12 09:51:03 localhost k3s[632]: time="2021-10-12T09:51:03.793390440+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:51:03 http: TLS handshake error from 192.168.1.244:47452: remote error: tls: bad certificate"
Oct 12 09:51:03 localhost k3s[632]: E1012 09:51:03.877856     632 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"noldork3sm1.16ad3920c922ab82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"noldork3sm1", UID:"noldork3sm1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"noldork3sm1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0516d772e961382, ext:35281987218, loc:(*time.Location)(0x5917578)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0516d772e961382, ext:35281987218, loc:(*time.Location)(0x5917578)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "noldork3sm1.16ad3920c922ab82" is forbidden: not yet ready to handle request' (will not retry!)
Oct 12 09:51:03 localhost k3s[632]: E1012 09:51:03.884750     632 kubelet_node_status.go:93] Unable to register node "noldork3sm1" with API server: nodes "noldork3sm1" is forbidden: not yet ready to handle request
Oct 12 09:51:03 localhost k3s[632]: I1012 09:51:03.975815     632 trace.go:205] Trace[1870895053]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm1,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (12-Oct-2021 09:50:53.834) (total time: 10140ms):
Oct 12 09:51:03 localhost k3s[632]: Trace[1870895053]: [10.140541076s] [10.140541076s] END
Oct 12 09:51:03 localhost k3s[632]: I1012 09:51:03.978454     632 trace.go:205] Trace[1773855934]: "GuaranteedUpdate etcd3" type:*coordination.Lease (12-Oct-2021 09:50:53.837) (total time: 10141ms):
Oct 12 09:51:03 localhost k3s[632]: Trace[1773855934]: [10.141074568s] [10.141074568s] END
Oct 12 09:51:04 localhost k3s[632]: I1012 09:51:04.098455     632 kubelet_node_status.go:71] Attempting to register node noldork3sm1
Oct 12 09:51:04 localhost k3s[632]: time="2021-10-12T09:51:04.376172035+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:51:04 http: TLS handshake error from 192.168.1.246:46390: remote error: tls: bad certificate"
Oct 12 09:51:04 localhost k3s[632]: E1012 09:51:04.454604     632 node.go:161] Failed to retrieve node info: nodes "noldork3sm1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 12 09:51:04 localhost k3s[632]: time="2021-10-12T09:51:04.861711413+02:00" level=info msg="Cluster-Http-Server 2021/10/12 09:51:04 http: TLS handshake error from 192.168.1.245:59244: remote error: tls: bad certificate"
Oct 12 09:51:06 localhost k3s[632]: time="2021-10-12T09:51:06.494210711+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 12 09:51:07 localhost k3s[632]: I1012 09:51:07.668790     632 trace.go:205] Trace[1767888587]: "GuaranteedUpdate etcd3" type:*core.Node (12-Oct-2021 09:50:47.629) (total time: 20039ms):
Oct 12 09:51:07 localhost k3s[632]: Trace[1767888587]: [20.039464983s] [20.039464983s] END
Oct 12 09:51:07 localhost k3s[632]: I1012 09:51:07.673555     632 trace.go:205] Trace[1321424051]: "Update" url:/api/v1/nodes/noldork3sm1,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (12-Oct-2021 09:50:47.625) (total time: 20048ms):
Oct 12 09:51:07 localhost k3s[632]: Trace[1321424051]: [20.048199909s] [20.048199909s] END
Oct 12 09:51:07 localhost k3s[632]: time="2021-10-12T09:51:07.755255101+02:00" level=info msg="Failed to update node noldork3sm1: nodes \"noldork3sm1\" is forbidden: not yet ready to handle request"
Oct 12 09:51:08 localhost k3s[632]: I1012 09:51:08.332481     632 trace.go:205] Trace[935516519]: "GuaranteedUpdate etcd3" type:*core.Node (12-Oct-2021 09:50:48.286) (total time: 20045ms):
Oct 12 09:51:08 localhost k3s[632]: Trace[935516519]: [20.04551378s] [20.04551378s] END
Oct 12 09:51:08 localhost k3s[632]: I1012 09:51:08.333219     632 trace.go:205] Trace[108806136]: "Patch" url:/api/v1/nodes/noldork3sm1/status,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (12-Oct-2021 09:50:48.285) (total time: 20047ms):
Oct 12 09:51:08 localhost k3s[632]: Trace[108806136]: ---"About to apply patch" 10008ms (09:50:00.318)
Oct 12 09:51:08 localhost k3s[632]: Trace[108806136]: [20.04717218s] [20.04717218s] END
Oct 12 09:51:08 localhost k3s[632]: time="2021-10-12T09:51:08.350814021+02:00" level=fatal msg="flannel exited: failed to acquire lease: nodes \"noldork3sm1\" is forbidden: not yet ready to handle request"
Oct 12 09:51:08 localhost systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 12 09:51:08 localhost systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 12 09:51:08 localhost systemd[1]: Failed to start Lightweight Kubernetes.

@i5Js
Author

i5Js commented Oct 12, 2021

I've installed the new version on master1 and, after a lot of troubleshooting, all the pods are working again. Honestly, I don't know what the problem was, but it's working now. @brandond, feel free to decide whether to keep this ticket open.

Cheers.

@brandond
Member

Oct 12 09:50:59 localhost avahi-daemon[272]: Joining mDNS multicast group on interface flannel.1.IPv4 with address 169.254.63.245.
Oct 12 09:50:59 localhost avahi-daemon[272]: New relevant interface flannel.1.IPv4 for mDNS.
Oct 12 09:50:59 localhost avahi-daemon[272]: Registering new address record for 169.254.63.245 on flannel.1.IPv4.

I have seen Avahi trying to attach itself to all the pod interfaces cause really bad behavior. Could you try disabling/uninstalling avahi?
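If you still need mDNS on the host and can't uninstall avahi outright, one option is to tell avahi-daemon to stay off the cluster's virtual interfaces. A sketch of `/etc/avahi/avahi-daemon.conf` — `flannel.1` comes from the log above, while `cni0` is assumed here as the usual k3s bridge interface name:

```ini
# /etc/avahi/avahi-daemon.conf — keep avahi off the cluster's virtual interfaces.
[server]
# flannel.1 appears in the log above; cni0 is the usual k3s bridge interface.
deny-interfaces=flannel.1,cni0
# Alternatively, restrict avahi to the physical NIC only:
#allow-interfaces=eth0
```

Then restart it with `sudo systemctl restart avahi-daemon`, or disable it entirely with `sudo systemctl disable --now avahi-daemon`.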

@i5Js
Author

i5Js commented Oct 12, 2021

Sure thing @brandond, I’ll try that. Many thanks!

@mglants

mglants commented Oct 25, 2021

Same issue here. What kind of logs should I share?

Oct 26 02:53:42 master1 k3s[5697]: I1026 02:53:42.873156 5697 trace.go:205] Trace[769458025]: "Delete" url:/apis/apiregistration.k8s.io/v1/apiservices/v1alpha2.acme.cert-manager.io,user-agent:k3s/v1.21.5+k3s2 (linux/arm64) kubernetes/724ef70,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (26-Oct-2021 02:53:08.589) (total time: 34283ms):
Oct 26 02:53:42 master1 k3s[5697]: Trace[769458025]: [34.283282963s] [34.283282963s] END
Oct 26 02:53:43 master1 k3s[5697]: time="2021-10-26T02:53:43.020807948+03:00" level=fatal msg="flannel exited: failed to acquire lease: Timeout: request did not complete within requested timeout context deadline exceeded"
Oct 26 02:53:43 master1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 26 02:53:43 master1 systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 26 02:53:43 master1 systemd[1]: Failed to start Lightweight Kubernetes.

@brandond
Member

@mglants have you tried any of the several things suggested in the comments above?

@mglants

mglants commented Oct 26, 2021

@brandond SOLVED IT by vacuuming the database. My SQLite WAL file was over 80 GB (lol); after vacuuming it sits at 1 GB. Read operations were timing out because the DB was struggling to run selects against 80 GB of data.
I think the WAL file growth corresponds to an unclean database shutdown — my k3s had been crashing ever since the rpi rebooted.
After replacing the DB, k3s is up and running in under a minute.
(screenshot attached)
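For anyone else hitting this: a minimal sketch of what the vacuum fix does, demonstrated on a throwaway database. On a default k3s install the embedded SQLite datastore lives at /var/lib/rancher/k3s/server/db/state.db — stop the k3s service and back up the db directory before vacuuming the real file.

```python
import os
import sqlite3
import tempfile

# Throwaway database standing in for k3s's state.db
# (/var/lib/rancher/k3s/server/db/state.db on a default install).
path = os.path.join(tempfile.mkdtemp(), "state.db")

con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (v BLOB)")
# Bloat the file, then delete the rows: SQLite keeps the freed pages
# allocated, so the file does not shrink on its own.
con.executemany("INSERT INTO kv VALUES (?)", [(os.urandom(4096),) for _ in range(500)])
con.commit()
con.execute("DELETE FROM kv")
con.commit()
before = os.path.getsize(path)

# VACUUM rewrites the database into a new compact file; for k3s, run it
# (e.g. via the sqlite3 CLI) only while the k3s service is stopped.
con.execute("VACUUM")
con.close()
after = os.path.getsize(path)
print(f"before={before} after={after}")
```

The equivalent on a real node would be `sqlite3 state.db 'VACUUM;'` with k3s stopped; the WAL file is checkpointed and truncated in the process.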

@brandond
Member

That sounds like #4044

@jdbogdan

jdbogdan commented Oct 28, 2021

10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.699457994+08:00" level=info msg="Running kube-controller-manager --address=127.
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.699950987+08:00" level=info msg="Running cloud-controller-manager --allocate-no
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.701182748+08:00" level=info msg="Node token is available at /var/lib/rancher/k3
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.701208396+08:00" level=info msg="To join node to cluster: k3s agent -s https://
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.702395297+08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.702430210+08:00" level=info msg="Run: k3s kubectl"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.743306338+08:00" level=info msg="Cluster-Http-Server 2021/10/28 15:30:40 http:
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.751838802+08:00" level=info msg="Cluster-Http-Server 2021/10/28 15:30:40 http:
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.766152519+08:00" level=info msg="certificate CN=remote-h-pre-01 signed by CN=k3
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.770460680+08:00" level=info msg="certificate CN=system:node:remote-h-pre-01,O=s
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.811284502+08:00" level=info msg="Module overlay was already loaded"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.811331770+08:00" level=info msg="Module nf_conntrack was already loaded"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.811347315+08:00" level=info msg="Module br_netfilter was already loaded"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.811361493+08:00" level=info msg="Module iptable_nat was already loaded"
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.814650740+08:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/age
10月 28 15:30:40 Remote-h-pre-01 k3s[58325]: time="2021-10-28T15:30:40.814806760+08:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/age
10月 28 15:30:41 Remote-h-pre-01 k3s[58325]: containerd: exit status 1
10月 28 15:30:41 Remote-h-pre-01 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
10月 28 15:30:41 Remote-h-pre-01 systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: Unit k3s.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit k3s.service has failed.
--
-- The result is failed.
10月 28 15:30:41 Remote-h-pre-01 systemd[1]: k3s.service: Unit entered failed state.
10月 28 15:30:41 Remote-h-pre-01 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=k3s comm="systemd" exe="/lib/systemd/sys
10月 28 15:30:41 Remote-h-pre-01 systemd[1]: k3s.service: Failed with result 'exit-code'.

@brandond
Member

brandond commented Oct 28, 2021

@jdbogdan Your logs are truncated, both in line length and number of lines. Please run journalctl -u k3s &>k3s.log and attach the resulting file to a comment. Also, from what little I can see your problem is unrelated to the issue reported here so I suggest opening a new issue describing what's going on.

@stale

stale bot commented Apr 27, 2022

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Apr 27, 2022
@stale stale bot closed this as completed May 11, 2022