Issue with k8s.io/docs/cluster kube-flannel #12664
Comments
There is an issue tracker on GitHub for the Kubernetes project itself; however, for support requests the project would rather signpost you towards its community support channels. It sounds like those are places where you're more likely to find help and answers.
Specifically for Flannel, you can find a troubleshooting guide at https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md
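To make that guide actionable here, a minimal sketch of how one might pull diagnostics from the crashing pods is below; the pod name is simply one of the kube-flannel-ds-amd64 pods reported in this issue and will differ in other clusters.

```sh
# Minimal sketch: gather diagnostics for one of the crash-looping
# flannel pods named in this issue (substitute your own pod name).
kubectl -n kube-system logs kube-flannel-ds-amd64-llqcb              # current container logs
kubectl -n kube-system logs --previous kube-flannel-ds-amd64-llqcb   # logs from the last crashed container
kubectl -n kube-system describe pod kube-flannel-ds-amd64-llqcb      # events and restart reasons
```

The `--previous` flag is usually the useful one for a CrashLoopBackOff, since the current container may not live long enough to log anything.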
Thanks for getting in touch. Just a couple of minutes ago I finished my work and resolved the problem myself. It is all running fine now, with a working Flannel overlay network as well.
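For anyone hitting the same symptom later: a quick, hedged check that the overlay really is healthy is sketched below. The DaemonSet name comes from the pod listing in this issue, and the flannel.1 interface name assumes the default VXLAN backend.

```sh
# Minimal sketch for confirming the flannel overlay is healthy,
# assuming the default VXLAN backend (flannel.1 is an assumption).
kubectl -n kube-system get daemonset kube-flannel-ds-amd64   # DESIRED, READY and AVAILABLE should match
ip -d link show flannel.1                                    # run on each node; the VXLAN device should exist
```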
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is a...
Problem:
Hi Team,
I don't know what to do after spending all day troubleshooting the following problem:
I deployed a k8s cluster on three CentOS 7 machines. It appears to be running fine:
kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master.localdomain   Ready    master   11h    v1.13.3
k8s-node01.localdomain   Ready    <none>   157m   v1.13.3
k8s-node02.localdomain   Ready    <none>   160m   v1.13.3
But.....
kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY   STATUS             RESTARTS   AGE
kube-system   coredns-86c58d9df4-nmc6k                         1/1     Running            3          11h
kube-system   coredns-86c58d9df4-wkhpb                         1/1     Running            3          11h
kube-system   etcd-k8s-master.localdomain                      1/1     Running            3          11h
kube-system   kube-apiserver-k8s-master.localdomain            1/1     Running            3          11h
kube-system   kube-controller-manager-k8s-master.localdomain   1/1     Running            3          11h
kube-system   kube-flannel-ds-amd64-6w4jk                      1/1     Running            4          11h
kube-system   kube-flannel-ds-amd64-llqcb                      0/1     CrashLoopBackOff   39         168m
kube-system   kube-flannel-ds-amd64-xb9rv                      0/1     CrashLoopBackOff   40         171m
kube-system   kube-proxy-58jzn                                 1/1     Running            1          171m
kube-system   kube-proxy-t82zp                                 1/1     Running            1          168m
kube-system   kube-proxy-zh9dt                                 1/1     Running            3          11h
kube-system   kube-scheduler-k8s-master.localdomain            1/1     Running            3          11h
The flannel overlay network pods for node01 and node02 come up for a few seconds and then... BOOM... they go straight back to "CrashLoopBackOff".
If anybody needs more information (e.g. container or API server logs, ...), I will provide whatever you need to help me out of this mess.
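For context, a frequently reported cause of exactly this symptom is that the control plane was initialised without a Pod CIDR matching flannel's default 10.244.0.0/16, which shows up in the flannel logs as an error about the node's pod CIDR not being assigned; blocked VXLAN traffic between the nodes is another usual suspect. A hedged sketch of how one might check both is below; the 10.244.0.0/16 CIDR and the firewalld commands are assumptions about a default kubeadm/flannel setup on CentOS 7, not details taken from this report.

```sh
# Minimal sketch, assuming a kubeadm cluster running flannel's stock
# kube-flannel.yml with its default 10.244.0.0/16 network (an assumption).

# 1. Check whether each node has a Pod CIDR assigned; flannel
#    crash-loops when this field is empty.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# 2. On CentOS 7, make sure VXLAN traffic (UDP 8472) used by flannel's
#    default backend is allowed between the nodes.
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
```

If the Pod CIDRs come back empty, the usual fix is to initialise the control plane with `kubeadm init --pod-network-cidr=10.244.0.0/16` so that the stock flannel manifest can acquire a lease for each node.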
Proposed Solution:
Page to Update:
https://kubernetes.io/...