Restarting host causes control plane to stop working #2640
It is happening right now. I needed to restart the server because my VirtualBox showed an error. The Pods are stuck and show this:
Here is the output of "kind export logs":
As I read in the other issue you linked, this seems to have something to do with my cluster having two worker nodes. I am really confused why such an error exists and why you write in the other issue that it has no priority. I would like to democratically vote for a higher priority. In any case, thank you for your work.
Daily story of pain, you can ignore this:
Luckily I can suspend the virtual machine overnight to prevent restarts. That might be my solution for now.
I'm sorry, but I have very limited time to work on this right now. I review and approve PRs, triage bug reports, etc., but the Kubernetes project has pressing work elsewhere (e.g. we are exceeding our $3M/year GCP budget), and I have other obligations (e.g. writing peer feedback for performance reviews at work). This appears to be a duplicate of #2045, which has much more context on the situation.

I highly recommend using a single-node cluster as well, unless you have a strong, concrete need for multiple nodes. The nodes share the same host resources, and multi-node support is only implemented because it is required for testing some Kubernetes internals (see also: https://kind.sigs.k8s.io/docs/contributing/project-scope/). For developing Kubernetes applications, a single node is preferable and better supported.

If you'd like to help resolve multi-node reboots, please take a look at #2045.
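For reference, a minimal sketch of switching to a single-node cluster, which is kind's default when no config is given (the config file name is just an example, and this assumes the existing cluster's contents are disposable):

```sh
# Delete the existing multi-node cluster (default cluster name is "kind").
kind delete cluster

# Option 1: no config at all; kind creates a single control-plane node by default.
kind create cluster

# Option 2: be explicit with a config file (file name chosen here for illustration).
cat > kind-single-node.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
EOF
kind create cluster --config kind-single-node.yaml
```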
Okay, thank you. I will somehow manage to work with it.
Hello,
Situation:
I have a kind cluster in a virtual machine. I deployed the Kubernetes dashboard in it via Helm.
Problem:
If I shut down the VM and restart it, the kind cluster's controllers seem to be stuck.
For example, if I delete all pods in the "dashboard" namespace, they are not recreated. I expect the controllers to restore the desired state according to the Deployments.
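For reference, a quick sketch of how that expectation can be checked (the namespace name is taken from the description above):

```sh
# Delete all pods in the namespace; on a healthy cluster the Deployments'
# ReplicaSets should recreate them within a few seconds.
kubectl -n dashboard delete pods --all
kubectl -n dashboard get pods --watch
```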
What I tried:
Question:
Does anyone have an idea how I can debug this problem, or what might be causing it?
Currently I completely reinstall the virtual machine every time to make the cluster work again.
Please let me know what additional information I can provide.
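For reference, a rough sketch of the first checks typically used to narrow this down after a reboot (the context name and output directory are assumptions based on kind's defaults):

```sh
# 1. Verify the kind node containers came back up after the VM reboot.
docker ps -a

# 2. Check whether the API server answers and the nodes are Ready.
kubectl cluster-info --context kind-kind
kubectl get nodes
kubectl -n kube-system get pods

# 3. Collect logs from all nodes for a bug report.
kind export logs ./kind-logs
```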