
Issue with k8s.io/docs/admin/high-availability/ #6295

Closed

kumarganesh2814 opened this issue Nov 13, 2017 · 7 comments
@kumarganesh2814

This is a...

  • Feature Request
  • Bug Report

Problem:
I followed the steps from https://kubernetes.io/docs/admin/high-availability/
and was able to see two etcd-server containers come up.

Running the cluster health command against the first etcd server works:

kubectl exec etcd-server-kuber-poc-app1 etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

But against the second etcd server I see this error:

kubectl exec etcd-server-kuber-poc-app2 etcdctl cluster-health

Error: client: etcd cluster is unavailable or misconfigured
error #0: client: endpoint http://127.0.0.1:4001 exceeded header timeout
error #1: dial tcp 127.0.0.1:2379: getsockopt: connection refused

cluster may be unhealthy: failed to list members

Please advise on this error.
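(One way to narrow this down, sketched here with the pod names from this report; the etcdctl v2 flags are standard but not taken from the original thread. List the members the healthy node knows about, then query the second node's advertised client URL explicitly:

kubectl exec etcd-server-kuber-poc-app1 etcdctl member list
kubectl exec etcd-server-kuber-poc-app2 -- etcdctl --endpoints http://127.0.0.1:2379 cluster-health

If member list shows only one member, the second etcd instance never joined the cluster.)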

Proposed Solution:

Page to Update:
http://kubernetes.io/...

kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Docker Version 17.10.0-ce
Kubernetes Version v1.8.1
CentOS Version CentOS Linux release 7.4.1708 (Core)

@zacharysarah (Contributor)

@kumarganesh2814 👋 This issue sounds more like a request for support and less like an issue specifically for docs. You can bring your question to the #kubernetes-users channel in Kubernetes Slack. You can also search resources like Stack Overflow for answers to similar questions.

If after seeking support you discover an issue with documentation, please feel free to reopen this issue.

@kumarganesh2814 (Author)

2017-11-14 03:14:56.774499 I | discovery: cluster status check: error connecting to https://discovery.etcd.io, retrying in 18h12m16s
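(The "discovery" prefix in that log line means these etcd members were bootstrapped through the public discovery service, which etcd then periodically re-checks. A quick way to test whether the service is reachable from the master node at all, assuming curl is installed and outbound HTTPS is expected to work in this environment:

curl -fsS https://discovery.etcd.io/new?size=2

If that hangs or fails, the node has no route to discovery.etcd.io, for example a missing proxy setting, which matches the retry message above.)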

@kumarganesh2814 (Author)

I see this error in the logs of the etcd pods. Any advice on this error?

I posted this in Slack as well but got no reply, and I don't see a concrete answer to this issue anywhere.

Best Regards
Ganesh

@tengqm (Contributor) commented Nov 14, 2017

@kumarganesh2814 It looks like your second etcd instance failed to start or respond. You may want to check the etcd documentation or forum.
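(For reference, the usual first steps for a pod in this state, sketched with the pod name from this issue, are to pull its logs and its events:

kubectl logs etcd-server-kuber-poc-app2
kubectl describe pod etcd-server-kuber-poc-app2

The Events section at the bottom of the describe output usually says why the container keeps failing.)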

@kumarganesh2814 (Author)

@tengqm Thanks for the pointer.

I rebooted my VM and now I see the state below:

NAMESPACE       NAME                                        READY     STATUS             RESTARTS   AGE
default         default-http-backend-66b447d9cf-h2rmp       1/1       Running            1          1d
default         etcd-server-kuber-poc-app1                  1/1       Running            1          1d
default         etcd-server-kuber-poc-app2                  0/1       CrashLoopBackOff   45         1d
default         ro-dashboard-59c9c54bd9-v9gwr               0/1       CrashLoopBackOff   42         12d
ingress-nginx   default-http-backend-66b447d9cf-s7zqp       1/1       Running            1          1d
ingress-nginx   nginx-ingress-controller-59fbff6875-jw686   0/1       CrashLoopBackOff   41         5d
kube-system     etcd-kuber-poc-app1                         1/1       Running            4          12d
kube-system     kube-apiserver-kuber-poc-app1               1/1       Running            3          12d
kube-system     kube-controller-manager-kuber-poc-app1      1/1       Running            2          12d
kube-system     kube-dns-545bc4bfd4-mm5zg                   1/3       CrashLoopBackOff   103        15d
kube-system     kube-flannel-ds-26shx                       1/1       Running            1          1d
kube-system     kube-flannel-ds-qn84v                       1/1       Running            1          15d
kube-system     kube-flannel-ds-zlrq7                       1/1       Running            4          15d
kube-system     kube-proxy-mwcdp                            1/1       Running            1          1d
kube-system     kube-proxy-w9mwl                            1/1       Running            1          15d
kube-system     kube-proxy-z8l8j                            1/1       Running            3          15d
kube-system     kube-scheduler-kuber-poc-app1               1/1       Running            3          12d
tomcat          default-http-backend-66b447d9cf-g8dk6       1/1       Running            1          1d
tomcat          tomcat-b7b984958-gqfxw                      1/1       Running            1          6h
tomcat          tomcat-b7b984958-gwpfq                      1/1       Running            1          6h
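(For the pods stuck in CrashLoopBackOff, a sketch of how one might inspect the last failed run, using the etcd pod from the table above as an example; --previous is a standard kubectl flag that shows the logs of the prior, crashed container instance:

kubectl logs --previous etcd-server-kuber-poc-app2
kubectl describe pod etcd-server-kuber-poc-app2)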

@kumarganesh2814 (Author)

Error for kube-dns:
failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff
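(kube-dns runs several containers in one pod, so the dnsmasq logs have to be requested by container name; a sketch using the pod name from the table above:

kubectl logs -n kube-system kube-dns-545bc4bfd4-mm5zg -c dnsmasq)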

@kumarganesh2814 (Author)

All issues are resolved now and I am running with 2 etcd-server pods, but I still see an error like:

2017-11-14 14:43:15.428194 I | discovery: cluster status check: error connecting to https://discovery.etcd.io, retrying in 2m8s
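(This log line is etcd periodically re-contacting the public discovery service it was bootstrapped with; the cluster itself is fine. One way to avoid depending on discovery.etcd.io entirely, sketched with placeholder member names and IPs rather than values from this issue, is to bootstrap etcd with a static initial cluster instead of a discovery URL:

etcd --name etcd0 \
  --initial-advertise-peer-urls http://10.0.0.1:2380 \
  --listen-peer-urls http://10.0.0.1:2380 \
  --advertise-client-urls http://10.0.0.1:2379 \
  --listen-client-urls http://10.0.0.1:2379,http://127.0.0.1:2379 \
  --initial-cluster etcd0=http://10.0.0.1:2380,etcd1=http://10.0.0.2:2380 \
  --initial-cluster-state new

With --initial-cluster set on both members, the --discovery flag can be dropped and the status-check retries stop.)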
