Issue with k8s.io/docs/cluster kube-flannel #12664

Closed
1 of 2 tasks
tux1980 opened this issue Feb 15, 2019 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tux1980

tux1980 commented Feb 15, 2019

This is a...

  • Feature Request
  • Bug Report

Problem:
Hi Team,
I don't know what to do after spending all day troubleshooting the following problem:
I deployed a k8s cluster on three CentOS 7 machines. The nodes themselves are evidently running fine:

kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master.localdomain   Ready    master   11h    v1.13.3
k8s-node01.localdomain   Ready    <none>   157m   v1.13.3
k8s-node02.localdomain   Ready    <none>   160m   v1.13.3

But.....
kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY   STATUS             RESTARTS   AGE
kube-system   coredns-86c58d9df4-nmc6k                         1/1     Running            3          11h
kube-system   coredns-86c58d9df4-wkhpb                         1/1     Running            3          11h
kube-system   etcd-k8s-master.localdomain                      1/1     Running            3          11h
kube-system   kube-apiserver-k8s-master.localdomain            1/1     Running            3          11h
kube-system   kube-controller-manager-k8s-master.localdomain   1/1     Running            3          11h
kube-system   kube-flannel-ds-amd64-6w4jk                      1/1     Running            4          11h
kube-system   kube-flannel-ds-amd64-llqcb                      0/1     CrashLoopBackOff   39         168m
kube-system   kube-flannel-ds-amd64-xb9rv                      0/1     CrashLoopBackOff   40         171m
kube-system   kube-proxy-58jzn                                 1/1     Running            1          171m
kube-system   kube-proxy-t82zp                                 1/1     Running            1          168m
kube-system   kube-proxy-zh9dt                                 1/1     Running            3          11h
kube-system   kube-scheduler-k8s-master.localdomain            1/1     Running            3          11h

The flannel overlay network pods on node01 and node02 come up for a few seconds and then... BOOM... they go straight back into CrashLoopBackOff.

If anybody needs more information (e.g. container or API server logs), I will provide whatever you need to help me out of this mess.
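For example, something along these lines should pull the flannel logs and events (pod names taken from the listing above; treat this as a generic sketch, not output I have already captured):

# logs from one of the crashing flannel pods
kubectl -n kube-system logs kube-flannel-ds-amd64-llqcb
# logs from the previous, crashed container instance
kubectl -n kube-system logs -p kube-flannel-ds-amd64-llqcb
# recent events and container status for the same pod
kubectl -n kube-system describe pod kube-flannel-ds-amd64-llqcb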

Proposed Solution:

Page to Update:
https://kubernetes.io/...

@sftim
Contributor

sftim commented Feb 19, 2019

There is an issue tracker on GitHub for the Kubernetes project itself; however, for support requests the project wants to signpost you towards its community support channels instead.

It sounds like those are places where you're more likely to find help and answers.

@sftim
Contributor

sftim commented Feb 19, 2019

Specifically for Flannel, you can find a troubleshooting guide at https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md
Red Hat, the organisation behind Flannel, also provides general technical support (as a paid-for service).
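One of the first things that guide has you check is whether the pod CIDR assigned to each node actually matches flannel's configured network (with kubeadm that usually means having initialised with --pod-network-cidr=10.244.0.0/16 for flannel's default). Roughly, something like this shows both sides; this is a sketch assuming the stock kube-flannel.yml manifest, where the ConfigMap is named kube-flannel-cfg:

# pod CIDR the controller manager assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# network flannel expects, from its ConfigMap
kubectl -n kube-system get configmap kube-flannel-cfg -o yaml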

@tux1980
Author

tux1980 commented Feb 19, 2019

Thanks for getting in touch. Just a couple of minutes ago I finished my work and resolved the problem myself; it is all running fine now, with a working flannel overlay network as well.
I did it the Windows way - re-install, new deployment ;)
Thanks!
Wolfgang
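
For anyone who finds this later, the re-install route looks roughly like this with kubeadm (a sketch under the assumption that flannel's default 10.244.0.0/16 pod network is used and that the flannel manifest URL is the one current at the time; the values in angle brackets are placeholders):

# wipe the existing cluster state (run on the master and on every node)
kubeadm reset
# re-initialise the control plane with a pod CIDR matching flannel's default
kubeadm init --pod-network-cidr=10.244.0.0/16
# deploy the flannel overlay network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# re-join the workers with the token and hash printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>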

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jun 19, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
