fedora 33 apiserver not starting #9982

Closed
zevinar opened this issue Dec 17, 2020 · 11 comments
Labels
co/docker-driver — Issues related to kubernetes in container
kind/support — Categorizes issue or PR as a support question.
os/linux
triage/needs-information — Indicates an issue needs more information in order to work on it.

Comments

zevinar commented Dec 17, 2020

Hi,
I'm fairly new to Kubernetes (running on Fedora 33).

I'm trying to start minikube with:
minikube start --extra-config=kubelet.cgroup-driver=systemd

Here is the output (container log attached as well):

😄  minikube v1.15.1 on Fedora 33
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    ▪ kubelet.cgroup-driver=systemd
💢  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W1217 15:00:49.626238     763 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172

container.log
minikube.log
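
For reference, the kubelet checks suggested in the kubeadm output above can be run from the host like this (a sketch, assuming the default docker driver and the default container name minikube):

# inspect kubelet state inside the minikube container
minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
# list the kubernetes containers kubeadm started (or failed to start)
minikube ssh -- docker ps -a | grep kube | grep -v pause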

@tstromberg tstromberg changed the title Can't Start Minikube fedora 33 with kubelet.cgroup-driver=systemd: kubelet isn't running or healthy. Dec 17, 2020
tstromberg (Contributor) commented Dec 17, 2020

Can you please include the output of minikube logs? Thanks!

@tstromberg tstromberg added kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Dec 17, 2020
@afbjorklund
Copy link
Collaborator

afbjorklund commented Dec 17, 2020

Also please make sure that cgroups v1 is enabled, if using Docker 19.03
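
For context: Fedora 31 and later boot with cgroups v2 by default, which Docker 19.03 does not support. A common way to check and, if needed, switch the host back to the v1 hierarchy (a sketch; the grubby kernel-argument approach is the one commonly documented for Fedora):

# cgroup2fs here means the unified v2 hierarchy is active
stat -fc %T /sys/fs/cgroup/
# switch back to cgroups v1, then reboot
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot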

@afbjorklund afbjorklund added co/docker-driver Issues related to kubernetes in container os/linux labels Dec 17, 2020
zevinar (Author) commented Dec 17, 2020


> Can you please include the output of minikube logs? Thanks!

Attached, many thanks!

zevinar (Author) commented Dec 20, 2020


> Also please make sure that cgroups v1 is enabled, if using Docker 19.03

I believe both v1 and v2 are enabled; the output of grep cgroup /proc/filesystems is:
nodev cgroup
nodev cgroup2
Also, docker hello-world seems to be working fine.
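
Worth noting: /proc/filesystems only lists the cgroup filesystem types the kernel supports, not which hierarchy is actually mounted. One way to see the active layout (a sketch):

# shows whether cgroup2 (unified) and/or cgroup (v1) hierarchies are mounted
findmnt -t cgroup2,cgroup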

zevinar (Author) commented Dec 20, 2020


[Update]
I've upgraded to minikube 1.16.0 (with Docker 20.10.0) and tried to start it with a plain minikube start.

It looks like I'm having issues with the apiserver: I see connection refused messages in the console, and its status is Stopped.
Here is the output of minikube status:
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured
timeToStop: Nonexistent

Attached are: console output, minikube logs, container log

console.log
container.log
minikube.log
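
A few commands that may help narrow down why the apiserver stops (a sketch, assuming the docker driver):

# show only the problem lines minikube detected in its logs
minikube logs --problems
# find the exited apiserver container inside the minikube node
minikube ssh -- docker ps -a | grep kube-apiserver
# then inspect its logs (replace CONTAINERID with the id found above)
minikube ssh -- docker logs CONTAINERID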

@zevinar zevinar changed the title fedora 33 with kubelet.cgroup-driver=systemd: kubelet isn't running or healthy. fedora 33 apiserver not starting Dec 20, 2020
MoSattler commented

Same problem here. Weirdly enough, it worked on the first run, but hasn't worked since.

mazzystr commented Jan 28, 2021

I run the following:

  • minikube version: v1.17.0
  • crictl version
    Version: 0.1.0
    RuntimeName: cri-o
    RuntimeVersion: 1.20.0
    RuntimeApiVersion: v1alpha1

Both of these seem to work for me (curl https://api/healthz yields ok):
sudo minikube start --driver=none --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false"
su - minikube && minikube start --driver=podman --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false"

As soon as I add --apiserver-name= / --apiserver-names=, minikube fails to start.

v7 verbosity shows a lot of these errors:

❌  Problems detected in kubelet:
    Jan 27 16:30:36 blah kubelet[732071]: E0127 16:30:36.811536  732071 reflector.go:138] object-"kube-system"/"kube-proxy-token-dzgp7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-dzgp7" is forbidden: User "system:node:blah" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'blah' and this object
    Jan 27 16:30:36 blah kubelet[732071]: E0127 16:30:36.811602  732071 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:blah" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'blah' and this object

If I move the DNS A record for api to my host's IP and manually set --apiserver-name=api.yokel.local, minikube start succeeds and curl https://api/healthz yields ok. When I move the A record back to my haproxy, curl returns SSL errors.
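
The SSL errors after moving the A record back are consistent with the apiserver certificate not carrying that name as a SAN. One way to check (a sketch; 8443 is minikube's default apiserver port):

# print the subject alternative names in the apiserver's serving cert
echo | openssl s_client -connect api.yokel.local:8443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'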

fosterseth commented Feb 6, 2021

I ran into this same problem (kubelet not running). I reinstalled Fedora 33 using the ext4 filesystem instead of the default btrfs. After reinstalling, I didn't need to do anything extra like enabling cgroups v1 or using a different minikube driver; minikube start just worked out of the box.
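
To check whether your installation is in the same situation (a sketch; on Fedora's default layout, /var/lib is where the kubelet and Docker keep state):

# "btrfs" here points at the unsupported-filesystem case described above
stat -fc %T /var/lib
# with the docker driver, Docker's storage driver matters as well
docker info | grep -i 'storage driver'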

priyawadhwa commented

Ah, so it looks like minikube won't work with btrfs, because kubeadm doesn't support it -- see #6167.

@zevinar, perhaps that is your issue? If you could provide the output of docker info, that would help.
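
For the record, the relevant fields can be pulled straight out of docker info (a sketch; the CgroupVersion field is only reported by Docker 20.10 and later):

# storage driver, cgroup driver, and cgroup version in one line
docker info --format 'storage={{.Driver}} cgroup-driver={{.CgroupDriver}} cgroup-version={{.CgroupVersion}}'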

spowelljr (Member) commented

Hi @zevinar, we haven't heard back from you; do you still have this issue?

There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now, but feel free to reopen it when you're ready to provide more information.

garyburgmann commented Jun 13, 2021

For anyone else who lands here: I resolved this for myself on Fedora 34 this morning. Using the kvm2 driver was key, as I believe there are btrfs support issues:

# install packages for kvm2 driver
sudo dnf install @virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo usermod -aG libvirt $(whoami)
minikube config set driver kvm2
minikube start

I tested this with both the binary and rpm installation methods.
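
Before starting, the host's KVM setup can be sanity-checked like this (a sketch; virt-host-validate ships with libvirt). Note also that the usermod group change above only takes effect after logging out and back in:

# a non-zero count confirms hardware virtualization is exposed
egrep -c '(vmx|svm)' /proc/cpuinfo
# validate the libvirt/KVM setup end to end
virt-host-validate qemu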
