
Rebooting the Minikube VM leads to docker daemon service config disappearing #5706

Closed
jgoeres opened this issue Oct 23, 2019 · 12 comments
Labels
kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@jgoeres

jgoeres commented Oct 23, 2019

$ minikube version
minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6

I am starting Minikube with

minikube start --disk-size 30G --cpus 4 --insecure-registry : --memory 12g --kubernetes-version=v1.15.4

I chose Kubernetes 1.15.4 because the current released version of Helm doesn't work with 1.16 and Helm 2.15 has a nasty bug.

The cluster works fine until I restart the VM (e.g., after restarting my workstation). After that, kubectl cannot reach the cluster:

$ kubectl get pods -o wide
Unable to connect to the server: dial tcp 192.168.99.113:8443: connectex: No connection could be made because the target machine actively refused it.

Same for helm.
Checking inside the VM shows me that the Docker daemon isn't running:

$ minikube ssh
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Doing a

sudo systemctl status docker

gives

Unit docker.service could not be found.

So this sounds a bit like #1811, although the details are different.

I can fix this by running "minikube start" with the exact same parameters as before (if I use no parameters, the first thing minikube does is upgrade Kubernetes, which will not work for me, as mentioned above). The VM is preserved, as are my hostpath-based volumes.

@afbjorklund
Collaborator

afbjorklund commented Oct 23, 2019

This is a known bug; currently it is not possible to manage the VM outside of minikube start. :-(

You can set your preferred configuration with minikube config, so that a plain minikube start works.

@tstromberg tstromberg added the kind/support Categorizes issue or PR as a support question. label Oct 23, 2019
@tstromberg
Contributor

tstromberg commented Oct 23, 2019

Correct, you will need to run minikube start to start the appropriate services in the VM again. Do you mind confirming if this helps?

Much of this is due to the root filesystem of the VM being tmpfs, and thus wiped on reboot.

@tstromberg tstromberg added the triage/needs-information Indicates an issue needs more information in order to work on it. label Oct 23, 2019
@afbjorklund
Collaborator

afbjorklund commented Oct 23, 2019

Actually, this is a design decision from the ticket indirectly referenced above.
Rather than hard-coding the service configuration into the ISO image, it is generated at startup.

See 56e250e

It would be better if this could be stored in configuration (and eventually persisted),
rather than being hidden away in some obscure systemd file that is problematic to change?
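
For context, the file being generated at startup is a systemd drop-in for the Docker daemon, which is lost on reboot because the root filesystem is tmpfs. The following is only a hypothetical sketch of what such a provisioner-generated drop-in might look like; the actual path, flags, and contents are minikube's own and may differ:

```ini
# Hypothetical example: /etc/systemd/system/docker.service.d/10-machine.conf
# (regenerated by the provisioner on each "minikube start", so a plain VM
# reboot comes up without it)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
```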

@afbjorklund
Collaborator

The auto-upgrade feature is the discussion topic of #2570

@jgoeres
Author

jgoeres commented Oct 24, 2019

This is the first time I am really working with minikube, so I doubt I am the one to comment on design decisions - I can only communicate my expectations.
Having used Docker Toolbox for a while, I simply expected that after restarting my machine and firing up the VirtualBox VM, all services (Docker + the containers with the Kubernetes services) would come back up and I could continue working.
I also found the "lifecycle" of a minikube VM somewhat strange, in particular the fact that "minikube start" not only creates the VM if it doesn't exist but, as I now learned, is also needed to set up the stuff inside it (start the Docker daemon, the K8s containers, etc.).

I would have expected something more like this:

  • "minikube create" creates a new VM, allows me to configure what is inside the VM (K8s version, #cores etc.) and all the stuff inside is set to automatically restart on reboot (i.e., is a systemd service)
  • "minikube start" starts this VM (but NEVER changes anything); effectively it should just be an alternative to restarting the VM via the hypervisor, which today it is not
  • "minikube stop" stops it (as it does today)
  • "minikube delete" deletes it (as it does today)

In general, I am perfectly fine with the tmpfs approach - cattle vs. pets and all ;-)

But I think using "minikube config" to set the default settings of my minikube VM and then always running a simple "minikube start" (without having to specify the ton of params) would be an acceptable workaround. I say "would", because it seems that not all settings possible with minikube start have an equivalent setting in minikube config, or if they have one, it doesn't work.
I tried to replace

minikube start --disk-size 30G --cpus 4 --insecure-registry myinsecureregistry:12345 --memory 12g --kubernetes-version=v1.15.4 

with

minikube config set disk-size 30G 
minikube config set cpus 4 
minikube config set insecure-registry myinsecureregistry:12345 
minikube config set memory 12g 
minikube config set kubernetes-version v1.15.4 

Alas, neither setting "memory" nor insecure-registry (for which there was a recent feature, and which is indeed listed as a property in "minikube config --help") works:

$ minikube config set memory 12G
*
X Set failed: [memory:strconv.Atoi: parsing "12G": invalid syntax]


$ minikube config set insecure-registry myinsecureregistry:12345
*
X Set failed: [Cannot enable/disable invalid addon insecure-registry]

@afbjorklund
Collaborator

I also found the "lifecycle" of a minikube VM somewhat strange, in particular the fact that "minikube start" not only creates the VM if it doesn't exist, but as I now learned is also needed to set up the stuff inside it (start Docker daemon, K8s containers etc.).

That was the change in that other story: making it an optional configuration to not auto-create hosts.
However, users are now fairly used to minikube start creating everything, so that will probably stay.

Since we are using libmachine, that will always make sure to install/start docker ("provisioning").
Then we have that other kubeadm init functionality, to install/start kubernetes ("bootstrapping").

@afbjorklund
Collaborator

I say "would", because it seems that not all settings possible with minikube start have an equivalent setting in minikube config, or if they have one, it doesn't work.

Please open a new issue about that.

@jgoeres
Author

jgoeres commented Oct 25, 2019

Meanwhile, I tried other parameters, and also ran into this: #5727

@priyawadhwa

Hey @jgoeres you can find the reference for minikube config here.

Looks like you can't set insecure-registry, but memory is expected to be set in MB, so minikube config set memory 12288 should work.

@tstromberg
Contributor

The reboot part is annoying, but it is part of the design: host reboots don't automatically start minikube VMs. When the VM starts, it will be unhealthy until minikube start runs.

Moving the general part of cluster creation vs startup to #6097

@morhook

morhook commented Jan 9, 2020

I've tried also to do

$ minikube config set insecure-registry 192.168.99.0/24

💣  Set failed: [Cannot enable/disable invalid addon insecure-registry]

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@jessehu

jessehu commented Feb 16, 2020

Could we consider suspending the VM when executing minikube stop? 'Suspend' stores the state of the VM on disk. When a suspended VM is started, the original state is restored, so the k8s cluster and the resources installed in it are still there.
KVM and VMware Workstation & Fusion support suspending a VM. See https://wiki.openstack.org/wiki/Kvm-Pause-Suspend.
