
Failed to start ssh bare metal machine. Running "minikube delete" may fix it: config: please provide an IP address #10574

Closed
nikhil-1995 opened this issue Feb 23, 2021 · 5 comments
Labels
co/generic-driver, kind/support (Categorizes issue or PR as a support question.), triage/needs-information (Indicates an issue needs more information in order to work on it.)

Comments

@nikhil-1995

Steps to reproduce the issue:

Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

@afbjorklund
Collaborator

Possible duplicate of #10516, but it's hard to tell because no information was provided.

@afbjorklund added the kind/support, triage/needs-information, and co/generic-driver labels on Feb 23, 2021
@sharifelgamal
Collaborator

Hey @nikhil-1995, I don't yet have a clear way to replicate this issue. Would you mind adding some additional details? The following information would be helpful (one way to collect it is sketched after this list):

  • The exact minikube start command line used

  • The full output of the minikube start command, preferably with --alsologtostderr -v=4 for extra logging.

  • The full output of minikube logs

  • The full output of kubectl get po -A
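
For reference, one way to gather everything above in a single pass (the file names are just suggestions, and the start flags should match whatever you originally ran):

# capture the verbose start output, the full logs, and the pod list
minikube start --driver=ssh --alsologtostderr -v=4 2>&1 | tee start.log
minikube logs --file=logs.txt
kubectl get po -A > pods.txt

Attaching start.log, logs.txt, and pods.txt here should give us enough to triage.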

@spowelljr
Member

Hi @nikhil-1995, we haven't heard back from you. Do you still have this issue?
There isn't enough information here to make this actionable, and enough time has passed that it will be difficult to replicate.

I will close this issue for now, but feel free to reopen it once you're ready to provide more information.

@brandonros

qemu-system-aarch64 \
  -M virt \
  -m 2048M \
  -smp 2 \
  -cpu cortex-a57 \
  -kernel flatcar_production_pxe.vmlinuz \
  -initrd flatcar_production_pxe_image.cpio.gz \
  -append "flatcar.first_boot=1 sshkey=\"ssh-rsa REDACTED [email protected]\"" \
  -netdev user,id=mynet0,hostfwd=tcp::2222-:22 \
  -device e1000,netdev=mynet0 \
  -nographic
minikube start --driver=ssh --ssh-ip-address=127.0.0.1 --ssh-port=2222 --ssh-user=core --ssh-key='~/.ssh/id_rsa' --alsologtostderr --v=2
Brandons-MacBook-Air:flatcar brandonros 2022-03-27 17:52:40 $ minikube start --driver=ssh --ssh-ip-address=127.0.0.1 --ssh-port=2222 --ssh-user=core --ssh-key='~/.ssh/id_rsa' --alsologtostderr --v=2
I0327 17:52:46.015421   46453 out.go:297] Setting OutFile to fd 1 ...
I0327 17:52:46.015541   46453 out.go:349] isatty.IsTerminal(1) = true
I0327 17:52:46.015546   46453 out.go:310] Setting ErrFile to fd 2...
I0327 17:52:46.015551   46453 out.go:349] isatty.IsTerminal(2) = true
I0327 17:52:46.015631   46453 root.go:315] Updating PATH: /Users/brandonros/.minikube/bin
W0327 17:52:46.015900   46453 root.go:293] Error reading config file at /Users/brandonros/.minikube/config/config.json: open /Users/brandonros/.minikube/config/config.json: no such file or directory
I0327 17:52:46.016449   46453 out.go:304] Setting JSON to false
I0327 17:52:46.040908   46453 start.go:112] hostinfo: {"hostname":"Brandons-MacBook-Air.local","uptime":617759,"bootTime":1647800207,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.3","kernelVersion":"21.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"0e1716e5-b38b-5788-b067-7c2c4122bef6"}
W0327 17:52:46.041038   46453 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0327 17:52:46.062299   46453 out.go:176] 😄  minikube v1.25.2 on Darwin 12.3 (arm64)
😄  minikube v1.25.2 on Darwin 12.3 (arm64)
I0327 17:52:46.062866   46453 notify.go:193] Checking for updates...
W0327 17:52:46.062935   46453 preload.go:295] Failed to list preload files: open /Users/brandonros/.minikube/cache/preloaded-tarball: no such file or directory
I0327 17:52:46.063464   46453 config.go:176] Loaded profile config "minikube": Driver=ssh, ContainerRuntime=crio, KubernetesVersion=v1.23.3
W0327 17:52:46.064535   46453 start.go:706] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0327 17:52:46.064987   46453 driver.go:344] Setting default libvirt URI to qemu:///system
W0327 17:52:46.065037   46453 start.go:706] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0327 17:52:46.101129   46453 out.go:176] ✨  Using the ssh driver based on existing profile
✨  Using the ssh driver based on existing profile
I0327 17:52:46.101152   46453 start.go:281] selected driver: ssh
I0327 17:52:46.101155   46453 start.go:798] validating driver "ssh" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:127.0.0.1 SSHUser:core SSHKey:~/.ssh/id_rsa SSHPort:2222 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0327 17:52:46.101247   46453 start.go:809] status for ssh: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 17:52:46.101873   46453 cni.go:93] Creating CNI manager for ""
I0327 17:52:46.101892   46453 cni.go:163] "ssh" driver + crio runtime found, recommending bridge
I0327 17:52:46.101899   46453 start_flags.go:302] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:127.0.0.1 SSHUser:core SSHKey:~/.ssh/id_rsa SSHPort:2222 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0327 17:52:46.140298   46453 out.go:176] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0327 17:52:46.140556   46453 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime crio
I0327 17:52:46.140921   46453 profile.go:148] Saving config to /Users/brandonros/.minikube/profiles/minikube/config.json ...
I0327 17:52:46.141387   46453 cache.go:107] acquiring lock: {Name:mk049b7610ad69aee18a76c2ef7a6fb788355dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.141437   46453 cache.go:107] acquiring lock: {Name:mka018dd765d91d0dc226c67657d3b3654bfecc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.141466   46453 cache.go:107] acquiring lock: {Name:mkbe44b448e5cc884dc9f8eebec7e4e18672ae5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.141513   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-apiserver_v1.23.3 exists
I0327 17:52:46.141522   46453 cache.go:107] acquiring lock: {Name:mkd34c24745bfddc7b646048be97ca04dee103ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.141533   46453 cache.go:107] acquiring lock: {Name:mkd06538512677247fd668a3971fc57fb0f90e46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.141552   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-controller-manager_v1.23.3 exists
I0327 17:52:46.142635   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.1-0 exists
I0327 17:52:46.142646   46453 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.1-0" took 1.112791ms
I0327 17:52:46.141516   46453 cache.go:107] acquiring lock: {Name:mkefcd807802f3a3953ffeea73e357a7eb439d73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.142685   46453 cache.go:107] acquiring lock: {Name:mk2790a01865c9bf2499ec1971777d454bc53c3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.142699   46453 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.3" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-apiserver_v1.23.3" took 190.417µs
I0327 17:52:46.142812   46453 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.3" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-controller-manager_v1.23.3" took 133µs
I0327 17:52:46.142855   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-proxy_v1.23.3 exists
I0327 17:52:46.142862   46453 cache.go:107] acquiring lock: {Name:mk3ff15aefab60b7f3b46e2de38f6fe8e5d851c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.142861   46453 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.3" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-proxy_v1.23.3" took 1.304458ms
I0327 17:52:46.141525   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
I0327 17:52:46.142920   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0327 17:52:46.142923   46453 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.511709ms
I0327 17:52:46.142925   46453 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/brandonros/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 65.875µs
I0327 17:52:46.143666   46453 cache.go:107] acquiring lock: {Name:mkc8f5ad63eaf19eed042407a0382d420dc97cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.143717   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/dashboard_v2.3.1 exists
I0327 17:52:46.143723   46453 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/dashboard_v2.3.1" took 60µs
I0327 17:52:46.152720   46453 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
I0327 17:52:46.141392   46453 cache.go:107] acquiring lock: {Name:mkad7a008055115664afa235bc26815369901f70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 17:52:46.153219   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/pause_3.6 exists
I0327 17:52:46.153228   46453 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/pause_3.6" took 11.900334ms
I0327 17:52:46.153238   46453 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/pause_3.6 succeeded
I0327 17:52:46.152722   46453 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.3 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-controller-manager_v1.23.3 succeeded
I0327 17:52:46.152726   46453 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
I0327 17:52:46.152752   46453 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.1-0 succeeded
I0327 17:52:46.152760   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-scheduler_v1.23.3 exists
I0327 17:52:46.153386   46453 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.3" -> "/Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-scheduler_v1.23.3" took 11.888666ms
I0327 17:52:46.153391   46453 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.3 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-scheduler_v1.23.3 succeeded
I0327 17:52:46.152789   46453 cache.go:115] /Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I0327 17:52:46.153396   46453 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 12.069584ms
I0327 17:52:46.153401   46453 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/brandonros/.minikube/cache/images/arm64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I0327 17:52:46.152964   46453 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.3 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-apiserver_v1.23.3 succeeded
I0327 17:52:46.152963   46453 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.3 -> /Users/brandonros/.minikube/cache/images/arm64/k8s.gcr.io/kube-proxy_v1.23.3 succeeded
I0327 17:52:46.153255   46453 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/brandonros/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0327 17:52:46.153441   46453 cache.go:87] Successfully saved all images to host disk.
I0327 17:52:46.153477   46453 cache.go:208] Successfully downloaded all kic artifacts
I0327 17:52:46.153618   46453 start.go:313] acquiring machines lock for minikube: {Name:mk8c213c6346d0e843654524f8eb4dfd51167654 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 17:52:46.153665   46453 start.go:317] acquired machines lock for "minikube" in 37.459µs
I0327 17:52:46.153847   46453 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:127.0.0.1 SSHUser:core SSHKey:~/.ssh/id_rsa SSHPort:2222 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I0327 17:52:46.153911   46453 start.go:126] createHost starting for "" (driver="ssh")
I0327 17:52:46.154164   46453 ssh_runner.go:195] Run: systemctl --version
I0327 17:52:46.154539   46453 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0327 17:52:46.432037   46453 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0327 17:52:46.973694   46453 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0327 17:52:47.630332   46453 start.go:129] duration metric: createHost completed in 1.476373416s
I0327 17:52:47.630396   46453 start.go:80] releasing machines lock for "minikube", held for 1.476757167s
W0327 17:52:47.630468   46453 start.go:570] error starting host: config: please provide real IP address
I0327 17:52:47.630823   46453 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
W0327 17:52:48.022889   46453 cli_runner.go:180] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0327 17:52:48.022962   46453 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:


stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0327 17:52:48.023061   46453 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
I0327 17:52:48.023079   46453 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exec: "podman": executable file not found in $PATH
stdout:

stderr:
W0327 17:52:48.023087   46453 start.go:575] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0327 17:52:48.023357   46453 out.go:241] 🤦  StartHost failed, but will try again: config: please provide real IP address
🤦  StartHost failed, but will try again: config: please provide real IP address
I0327 17:52:48.023365   46453 start.go:585] Will try again in 5 seconds ...
I0327 17:52:53.024440   46453 start.go:313] acquiring machines lock for minikube: {Name:mk8c213c6346d0e843654524f8eb4dfd51167654 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 17:52:53.024785   46453 start.go:317] acquired machines lock for "minikube" in 210.5µs
I0327 17:52:53.024819   46453 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:127.0.0.1 SSHUser:core SSHKey:~/.ssh/id_rsa SSHPort:2222 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I0327 17:52:53.025090   46453 start.go:126] createHost starting for "" (driver="ssh")
I0327 17:52:53.025209   46453 start.go:129] duration metric: createHost completed in 86.833µs
I0327 17:52:53.025218   46453 start.go:80] releasing machines lock for "minikube", held for 417.958µs
W0327 17:52:53.025585   46453 out.go:241] 😿  Failed to start ssh bare metal machine. Running "minikube delete" may fix it: config: please provide real IP address
😿  Failed to start ssh bare metal machine. Running "minikube delete" may fix it: config: please provide real IP address
I0327 17:52:53.080987   46453 out.go:176] 

W0327 17:52:53.081389   46453 out.go:241] ❌  Exiting due to GUEST_PROVISION: Failed to start host: config: please provide real IP address
❌  Exiting due to GUEST_PROVISION: Failed to start host: config: please provide real IP address
W0327 17:52:53.081408   46453 out.go:241] 

W0327 17:52:53.083699   46453 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0327 17:52:53.172950   46453 out.go:176] 

@brandonros

It just really hates localhost/127.0.0.1.

I had to add a /etc/hosts entry, roughly as sketched below. That made the error go away; I then hit a different error about `OS type not recognized`, but that's unrelated to this issue.
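
For anyone else landing here with a similar QEMU setup, a rough sketch of that workaround (the hostname is made up, and the idea that the ssh driver accepts a name that still resolves to loopback is my interpretation of the error, not something confirmed by the maintainers):

# hypothetical /etc/hosts entry giving the QEMU port-forward a name
127.0.0.1   flatcar-vm

# then point the ssh driver at the name instead of the literal 127.0.0.1
minikube start --driver=ssh --ssh-ip-address=flatcar-vm --ssh-port=2222 --ssh-user=core --ssh-key="$HOME/.ssh/id_rsa" --alsologtostderr --v=2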
