
None: Error Port 8443 is in use (conflict with sshd on CentOS) #4473

Closed

birbird opened this issue Jun 12, 2019 · 14 comments
Labels
cause/port-conflict · co/none-driver · kind/support · os/linux

Comments

@birbird commented Jun 12, 2019

# minikube start --vm-driver=none
😄  minikube v1.1.1 on linux (amd64)
🔥  Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
❌  Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
🚜  Pulling images ...
❌  Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: running command: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: exit status 1
🚀  Launching Kubernetes ...

💣  Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on [fe80::1%enp0s9]:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-8443]: Port 8443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

Before this I had already run minikube delete and killed the process on port 8443:

# minikube delete
🔄  Uninstalling Kubernetes v1.14.3 using kubeadm ...
🔥  Deleting "minikube" from none ...
💔  The "minikube" cluster has been deleted.
# lsof -i:8443
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    68686 root    3u  IPv4 858650      0t0  TCP *:pcsync-https (LISTEN)
# kill -9 68686

I ran this in a VirtualBox virtual machine; the guest OS is CentOS:

# minikube version
minikube version: v1.1.1
# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: tls: first record does not look like a TLS handshake
# lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.3.1611 (Core) 
Release:	7.3.1611
Codename:	Core

Any help will be highly appreciated!

@medyagh (Member) commented Jun 15, 2019

Thanks for sharing your experience. I am not sure we have tested the none driver with CentOS;
our docs say:
"The none driver supports releases of Debian, Ubuntu, and Fedora"
https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

Do you mind trying it on a recent version of Debian, Ubuntu, or Fedora to see if you still have that issue?

@medyagh medyagh changed the title [ERROR Port-8443]: Port 8443 is in use None: [ERROR Port-8443]: Port 8443 is in use Jun 15, 2019
@medyagh medyagh changed the title None: [ERROR Port-8443]: Port 8443 is in use None: Error Port 8443 is in use Jun 15, 2019
@birbird (Author) commented Jun 18, 2019

On an Ubuntu machine, it seems OK:

$ sudo /usr/local/bin/minikube start --vm-driver=none
* minikube v1.1.1 on linux (amd64)
* Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
* Unable to load cached images: loading cached images: loading image /home/ubuntu/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /home/ubuntu/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
* Pulling images ...
* Launching Kubernetes ...
* Configuring local host environment ...

! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:
  - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

! kubectl and minikube configuration will be stored in /home/ubuntu
! To use kubectl or minikube commands as your own user, you may
! need to relocate them. For example, to overwrite your own settings:

  - sudo mv /home/ubuntu/.kube /home/ubuntu/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube

* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"

@afbjorklund (Collaborator) commented Jun 18, 2019

This issue is something separate:

Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory

Unable to load cached images: loading cached images: loading image /home/ubuntu/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /home/ubuntu/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory

Seems to happen on all distros.


Fix: #4522

@afbjorklund (Collaborator)

@medyagh: it seems like a lot of users have issues when running directly on CentOS.

Most common causes seem to be SELinux (it needs to be disabled) and systemd.
Failing to update packages (yum update) is common, but normally not as fatal.

Currently we require this extra flag: --extra-config=kubelet.cgroup-driver=systemd

#2192 #2381
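
For reference, the other way around is to switch Docker itself to the systemd cgroup driver, so kubelet and Docker agree without the extra flag. A minimal sketch, assuming Docker 18.09 on CentOS 7 reading /etc/docker/daemon.json (this overwrites any existing daemon.json):

# Tell Docker to use the systemd cgroup driver instead of cgroupfs
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
# Verify; this should now print "Cgroup Driver: systemd"
docker info | grep -i 'cgroup driver'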


But it installs OK on CentOS, once one does the preparations and prerequisites.

$ minikube start --vm-driver=none --cache-images=false --extra-config=kubelet.cgroup-driver=systemd
* minikube v1.1.1 on linux (amd64)
* Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
  - kubelet.cgroup-driver=systemd
* Pulling images ...
* Launching Kubernetes ... 
* Configuring local host environment ...

! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:
  - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may
! need to relocate them. For example, to overwrite your own settings:

  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube

* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
* For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

As per the Kubernetes (Docker) docs, I disabled SELinux and enabled the bridge netfilter settings:

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

I needed to install socat and conntrack-tools too, for the minimal install; a sketch of the whole preparation is below.
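
A sketch of those preparation steps on CentOS 7, following the two guides above (run as root; an outline under those docs' assumptions, not an authoritative list):

# Put SELinux into permissive mode now, and persist it across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Make bridged traffic visible to iptables (required by the kubeadm guide)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Packages kubeadm/kubelet expect but a minimal CentOS install lacks
yum install -y socat conntrack-tools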

@afbjorklund (Collaborator)

@birbird: the actual port 8443 issue seems to be an unclean shutdown of a previous start.

You could try kubeadm reset, or something drastic like rebooting (or peeking at netstat).
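
A rough sequence for a clean retry (a sketch only; the iptables flush is the same one kubeadm reset prints as a hint):

# Tear down any half-initialized control plane without prompting
sudo kubeadm reset -f
# Flush leftover iptables rules, as suggested by kubeadm reset itself
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# Remove the minikube cluster state, then start fresh
sudo minikube delete
sudo minikube start --vm-driver=none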

@birbird (Author) commented Jun 19, 2019

@afbjorklund Thanks a lot for your help.
kubeadm reset does not work for me. I tried rebooting the virtual machine; that does not work either.

# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0619 11:37:01.231891  100418 reset.go:234] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

# minikube start --vm-driver=none
😄  minikube v1.1.1 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing none VM for "minikube" ...
⌛  Waiting for SSH access ...
🐳  Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
❌  Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
🔄  Relaunching Kubernetes v1.14.3 using kubeadm ...

💣  Error restarting cluster: waiting for apiserver: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
❌  Problems detected in "kube-apiserver":
    error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
❌  Problems detected in "kube-addon-manager":
    error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: tls: first record does not look like a TLS handshake
    error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: tls: first record does not look like a TLS handshake
    error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: tls: first record does not look like a TLS handshake

@medyagh (Member) commented Jun 19, 2019

Yeah, minikube stop is not really stopping reliably. I have been seeing this in a flaky test, and I have been trying to improve it: #4495

I am working on fixing the flaky stop. I might end up using our fork of libmachine, since we eventually call that lib to stop the VM. Suggestions are welcome (on improving minikube stop).

@afbjorklund (Collaborator)

I wonder if I mentioned stopping firewalld? If not, I will do so now. Stop firewalld :-)
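
On CentOS that is just the following (a small sketch; skip it if firewalld is already masked):

# Stop firewalld now and keep it from returning on reboot
sudo systemctl stop firewalld
sudo systemctl disable firewalld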

Another trick is reading the kubeadm call from the verbose output and calling it again directly.

But in your case, "something" is listening on port 8443, but it doesn't like to speak HTTPS.

Maybe netstat -lnp | grep 8443 would tell?

@birbird (Author) commented Jun 23, 2019

@afbjorklund Thanks.
My firewalld is not running. An sshd is on 8443:

# systemctl status firewalld
● firewalld.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)
# netstat -lnp | grep 8443
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      72337/sshd          

@afbjorklund (Collaborator) commented Jun 23, 2019

An sshd is on 8443

Okay, there is your problem. You can't have both the Kubernetes apiserver and an sshd on the same port.

I don't think we can have a better error than:

❌  Problems detected in "kube-apiserver":
    error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use

@birbird (Author) commented Jun 23, 2019

Hi, I did nothing to port 8443, and I have no idea where that sshd process came from.
What I can tell is that even after killing the process on 8443, minikube still does not work, as in the previous comments.
The whole kill-and-restart log is:

# lsof -i:8443
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
sshd    64599 root    3u  IPv4 3191625      0t0  TCP *:pcsync-https (LISTEN)
# kill -9 64599
# lsof -i:8443
# netstat -lnp | grep 8443
# minikube delete
🔄  Uninstalling Kubernetes v1.14.3 using kubeadm ...
🔥  Deleting "minikube" from none ...
💔  The "minikube" cluster has been deleted.
# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0623 03:59:49.577747   84263 reset.go:234] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

# minikube start --vm-driver=none
😄  minikube v1.1.1 on linux (amd64)
🔥  Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
❌  Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
🚜  Pulling images ...
❌  Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: running command: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: exit status 1
🚀  Launching Kubernetes ...

💣  Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on [fe80::1%enp0s9]:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-8443]: Port 8443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

@afbjorklund (Collaborator)

It is not something that comes by default with CentOS, so it must have been added down the road.

Minikube traditionally runs on port 8443, even though Kubernetes normally runs on port 6443.

And sshd listens on port 22 by default.
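
To see where that listener came from, checking the sshd configuration and its unit would be a reasonable first step (a sketch assuming the stock OpenSSH paths; if the unit has Restart=on-failure, a kill -9 would only make the listener reappear, which might explain the earlier logs):

# Any extra Port/ListenAddress lines here would explain an sshd on 8443
grep -Ei '^(Port|ListenAddress)' /etc/ssh/sshd_config
# Check whether systemd is set to restart sshd after it is killed
systemctl cat sshd | grep -i restart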

Yet another none driver issue...

@tstromberg tstromberg added the cause/port-conflict Start failures due to port or other network conflict label Jul 17, 2019
@tstromberg (Contributor)

minikube start --apiserver-port should be a valid workaround. Do you mind checking if it works for your case?
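
For example, moving the apiserver to Kubernetes' conventional 6443 (the port number here is just an illustration):

minikube start --vm-driver=none --apiserver-port=6443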

@tstromberg tstromberg changed the title None: Error Port 8443 is in use None: Error Port 8443 is in use (conflict with sshd on CentOS) Jul 17, 2019
@tstromberg tstromberg added triage/needs-information Indicates an issue needs more information in order to work on it. and removed kind/support Categorizes issue or PR as a support question. labels Jul 17, 2019
hswong3i added a commit to pantarei/ansible-role-minikube that referenced this issue Aug 10, 2019
@tstromberg tstromberg added needs-solution-message Issues where where offering a solution for an error would be helpful kind/support Categorizes issue or PR as a support question. and removed triage/needs-information Indicates an issue needs more information in order to work on it. labels Aug 23, 2019
@tstromberg (Contributor)

Closing as there is apparently a workaround. Noting that we should have a solution message recorded for this case. Thank you for filing this!

@tstromberg tstromberg removed the needs-solution-message Issues where where offering a solution for an error would be helpful label Apr 3, 2020