*
* ==> Audit <==
*
|---------|-------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| Command |                Args                 | Profile  | User | Version |          Start Time           |           End Time            |
|---------|-------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| delete  | -p arkade                           | arkade   | alex | v1.21.0 | Wed, 17 Nov 2021 10:56:22 GMT | Wed, 17 Nov 2021 10:56:22 GMT |
| start   | --addons                            | arkade   | alex | v1.21.0 | Wed, 17 Nov 2021 10:56:25 GMT | Wed, 17 Nov 2021 10:59:53 GMT |
|         | volumesnapshots,csi-hostpath-driver |          |      |         |                               |                               |
|         | --apiserver-port=6443               |          |      |         |                               |                               |
|         | --container-runtime=containerd      |          |      |         |                               |                               |
|         | --kubernetes-version=1.21.2 -p      |          |      |         |                               |                               |
|         | arkade                              |          |      |         |                               |                               |
| delete  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 09:37:10 GMT | Mon, 06 Dec 2021 09:37:10 GMT |
| start   | --addons                            | arkade   | alex | v1.21.0 | Mon, 06 Dec 2021 09:37:16 GMT | Mon, 06 Dec 2021 09:39:38 GMT |
|         | volumesnapshots,csi-hostpath-driver |          |      |         |                               |                               |
|         | --apiserver-port=6443               |          |      |         |                               |                               |
|         | --container-runtime=containerd      |          |      |         |                               |                               |
|         | --kubernetes-version=1.21.2 -p      |          |      |         |                               |                               |
|         | arkade                              |          |      |         |                               |                               |
| --help  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:24:35 GMT | Mon, 06 Dec 2021 11:24:35 GMT |
| profile | list                                | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:24:46 GMT | Mon, 06 Dec 2021 11:24:47 GMT |
| --help  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:24:49 GMT | Mon, 06 Dec 2021 11:24:49 GMT |
| ssh     | --help                              | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:24:52 GMT | Mon, 06 Dec 2021 11:24:52 GMT |
| --help  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:24:56 GMT | Mon, 06 Dec 2021 11:24:56 GMT |
| --help  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:25:05 GMT | Mon, 06 Dec 2021 11:25:05 GMT |
| ssh     | --help                              | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:25:07 GMT | Mon, 06 Dec 2021 11:25:07 GMT |
| ssh     | --profile arkade                    | arkade   | alex | v1.21.0 | Mon, 06 Dec 2021 11:25:12 GMT | Mon, 06 Dec 2021 11:27:35 GMT |
| delete  |                                     | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:48:42 GMT | Mon, 06 Dec 2021 11:48:42 GMT |
| delete  | --profile arkade                    | arkade   | alex | v1.21.0 | Mon, 06 Dec 2021 11:48:47 GMT | Mon, 06 Dec 2021 11:48:53 GMT |
| start   | --help                              | minikube | alex | v1.21.0 | Mon, 06 Dec 2021 11:49:02 GMT | Mon, 06 Dec 2021 11:49:02 GMT |
| delete  | -p arkade                           | arkade   | alex | v1.21.0 | Mon, 06 Dec 2021 11:51:20 GMT | Mon, 06 Dec 2021 11:51:20 GMT |
| start   | --addons                            | arkade   | alex | v1.21.0 | Mon, 06 Dec 2021 11:51:22 GMT | Mon, 06 Dec 2021 11:54:09 GMT |
|         | volumesnapshots,csi-hostpath-driver |          |      |         |                               |                               |
|         | --apiserver-port=6443               |          |      |         |                               |                               |
|         | --container-runtime=containerd      |          |      |         |                               |                               |
|         | --kubernetes-version=1.21.2 -p      |          |      |         |                               |                               |
|         | arkade --driver kvm2                |          |      |         |                               |                               |
|---------|-------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
*
Log file created at: 2021/12/06 11:51:22
Running on machine: alex-nuc8
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1206 11:51:22.052597 1661470 out.go:291] Setting OutFile to fd 1 ...
I1206 11:51:22.052685 1661470 out.go:343] isatty.IsTerminal(1) = true
I1206 11:51:22.052688 1661470 out.go:304] Setting ErrFile to fd 2...
I1206 11:51:22.052690 1661470 out.go:343] isatty.IsTerminal(2) = true I1206 11:51:22.052781 1661470 root.go:316] Updating PATH: /home/alex/.minikube/bin I1206 11:51:22.052958 1661470 out.go:298] Setting JSON to false I1206 11:51:22.071912 1661470 start.go:111] hostinfo: {"hostname":"alex-nuc8","uptime":331457,"bootTime":1638460025,"procs":640,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-91-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"201e00a1-4b9a-48fa-97a8-a9b6ed9a766b"} I1206 11:51:22.071973 1661470 start.go:121] virtualization: kvm host I1206 11:51:22.077765 1661470 out.go:170] 😄 [arkade] minikube v1.21.0 on Ubuntu 20.04 I1206 11:51:22.077907 1661470 driver.go:335] Setting default libvirt URI to qemu:///system I1206 11:51:22.105138 1661470 out.go:170] ✨ Using the kvm2 driver based on user configuration I1206 11:51:22.105160 1661470 start.go:279] selected driver: kvm2 I1206 11:51:22.105163 1661470 start.go:752] validating driver "kvm2" against I1206 11:51:22.105171 1661470 start.go:763] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:} I1206 11:51:22.105212 1661470 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1206 11:51:22.105343 1661470 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/alex/.minikube/bin:/home/alex/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/alex/go/bin/:/usr/local/go/bin/:/home/alex/go/bin/:/usr/local/go/bin/:/home/alex/go/bin/:/usr/local/go/bin/ I1206 11:51:22.117784 1661470 install.go:137] /home/alex/.minikube/bin/docker-machine-driver-kvm2 version is 1.21.0 I1206 11:51:22.117820 1661470 start_flags.go:259] no existing cluster config was found, will generate one from the flags I1206 11:51:22.118318 1661470 start_flags.go:311] Using suggested 6000MB memory alloc based on sys=32035MB, container=0MB I1206 11:51:22.118396 1661470 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true] I1206 11:51:22.118406 1661470 cni.go:93] Creating CNI manager for "" I1206 11:51:22.118413 1661470 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge I1206 11:51:22.118419 1661470 start_flags.go:268] Found "bridge CNI" CNI - setting NetworkPlugin=cni I1206 11:51:22.118423 1661470 start_flags.go:273] config: {Name:arkade KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:arkade Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: 
ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:6443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I1206 11:51:22.118475 1661470 iso.go:123] acquiring lock: {Name:mkf66e835876f7c3623d6863ffd9bef217fdff54 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1206 11:51:22.121396 1661470 out.go:170] 👍 Starting control plane node arkade in cluster arkade I1206 11:51:22.121415 1661470 preload.go:110] Checking if preload exists for k8s version v1.21.2 and runtime containerd I1206 11:51:22.121431 1661470 preload.go:125] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4 I1206 11:51:22.121438 1661470 cache.go:54] Caching tarball of preloaded images I1206 11:51:22.121499 1661470 preload.go:166] Found /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I1206 11:51:22.121512 1661470 cache.go:57] Finished verifying existence of preloaded tar for v1.21.2 on containerd I1206 11:51:22.121701 1661470 profile.go:148] Saving config to /home/alex/.minikube/profiles/arkade/config.json ... I1206 11:51:22.121711 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/config.json: {Name:mk58aef825d4c35c2f50b5d445707ac4412244a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:51:22.121792 1661470 cache.go:202] Successfully downloaded all kic artifacts I1206 11:51:22.121807 1661470 start.go:313] acquiring machines lock for arkade: {Name:mk6fd3cb678c181e143bef0a872802b905379a4f Clock:{} Delay:500ms Timeout:13m0s Cancel:} I1206 11:51:22.121845 1661470 start.go:317] acquired machines lock for "arkade" in 30.996µs I1206 11:51:22.121859 1661470 start.go:89] Provisioning new machine with config: &{Name:arkade KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:arkade Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:6443 NodeName:} Nodes:[{Name: IP: Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] 
VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true} I1206 11:51:22.121910 1661470 start.go:126] createHost starting for "" (driver="kvm2") I1206 11:51:22.126511 1661470 out.go:197] 🔥 Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... I1206 11:51:22.126613 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:51:22.126637 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:51:22.140202 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45125 I1206 11:51:22.140521 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:51:22.140916 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:51:22.140929 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:51:22.141155 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:51:22.141252 1661470 main.go:128] libmachine: (arkade) Calling .GetMachineName I1206 11:51:22.141331 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:51:22.141420 1661470 start.go:160] libmachine.API.Create for "arkade" (driver="kvm2") I1206 11:51:22.141442 1661470 client.go:168] LocalClient.Create starting I1206 11:51:22.141460 1661470 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/ca.pem I1206 11:51:22.141484 1661470 main.go:128] libmachine: Decoding PEM data... I1206 11:51:22.141499 1661470 main.go:128] libmachine: Parsing certificate... I1206 11:51:22.141623 1661470 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/cert.pem I1206 11:51:22.141638 1661470 main.go:128] libmachine: Decoding PEM data... I1206 11:51:22.141645 1661470 main.go:128] libmachine: Parsing certificate... I1206 11:51:22.141672 1661470 main.go:128] libmachine: Running pre-create checks... I1206 11:51:22.141681 1661470 main.go:128] libmachine: (arkade) Calling .PreCreateCheck I1206 11:51:22.141963 1661470 main.go:128] libmachine: (arkade) Calling .GetConfigRaw I1206 11:51:22.142262 1661470 main.go:128] libmachine: Creating machine... I1206 11:51:22.142268 1661470 main.go:128] libmachine: (arkade) Calling .Create I1206 11:51:22.142370 1661470 main.go:128] libmachine: (arkade) Creating KVM machine... I1206 11:51:22.142991 1661470 main.go:128] libmachine: (arkade) DBG | found existing default KVM network I1206 11:51:22.143058 1661470 main.go:128] libmachine: (arkade) DBG | found existing private KVM network mk-arkade I1206 11:51:22.143125 1661470 main.go:128] libmachine: (arkade) Setting up store path in /home/alex/.minikube/machines/arkade ... I1206 11:51:22.143131 1661470 main.go:128] libmachine: (arkade) Building disk image from file:///home/alex/.minikube/cache/iso/minikube-v1.21.0.iso I1206 11:51:22.143181 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:22.143119 1661492 common.go:101] Making disk image using store path: /home/alex/.minikube I1206 11:51:22.143231 1661470 main.go:128] libmachine: (arkade) Downloading /home/alex/.minikube/cache/boot2docker.iso from file:///home/alex/.minikube/cache/iso/minikube-v1.21.0.iso... I1206 11:51:22.266670 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:22.266564 1661492 common.go:108] Creating ssh key: /home/alex/.minikube/machines/arkade/id_rsa... 
I1206 11:51:22.666906 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:22.666822 1661492 common.go:114] Creating raw disk image: /home/alex/.minikube/machines/arkade/arkade.rawdisk... I1206 11:51:22.666924 1661470 main.go:128] libmachine: (arkade) DBG | Writing magic tar header I1206 11:51:22.666944 1661470 main.go:128] libmachine: (arkade) DBG | Writing SSH key tar header I1206 11:51:22.666954 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:22.666930 1661492 common.go:128] Fixing permissions on /home/alex/.minikube/machines/arkade ... I1206 11:51:22.667027 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube/machines/arkade I1206 11:51:22.667038 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube/machines I1206 11:51:22.667047 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube/machines/arkade (perms=drwx------) I1206 11:51:22.667059 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube/machines (perms=drwxr-xr-x) I1206 11:51:22.667066 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube (perms=drwxr-xr-x) I1206 11:51:22.667070 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube I1206 11:51:22.667074 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex (perms=drwxr-xr-x) I1206 11:51:22.667078 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex I1206 11:51:22.667088 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home I1206 11:51:22.667093 1661470 main.go:128] libmachine: (arkade) Creating domain... I1206 11:51:22.667097 1661470 main.go:128] libmachine: (arkade) DBG | Skipping /home - not owner I1206 11:51:22.676274 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:88:b2:12 in network default I1206 11:51:22.676599 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:22.676609 1661470 main.go:128] libmachine: (arkade) Ensuring networks are active... I1206 11:51:22.677093 1661470 main.go:128] libmachine: (arkade) Ensuring network default is active I1206 11:51:22.677293 1661470 main.go:128] libmachine: (arkade) Ensuring network mk-arkade is active I1206 11:51:22.677577 1661470 main.go:128] libmachine: (arkade) Getting domain xml... I1206 11:51:22.678140 1661470 main.go:128] libmachine: (arkade) Creating domain... I1206 11:51:24.104341 1661470 main.go:128] libmachine: (arkade) Waiting to get IP... 
I1206 11:51:24.104970 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:24.105194 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:24.105205 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:24.105179 1661492 retry.go:31] will retry after 263.082536ms: waiting for machine to come up I1206 11:51:24.369087 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:24.369302 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:24.369312 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:24.369280 1661492 retry.go:31] will retry after 381.329545ms: waiting for machine to come up I1206 11:51:24.752174 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:24.752440 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:24.752454 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:24.752427 1661492 retry.go:31] will retry after 422.765636ms: waiting for machine to come up I1206 11:51:25.176384 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:25.176622 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:25.176631 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:25.176559 1661492 retry.go:31] will retry after 473.074753ms: waiting for machine to come up I1206 11:51:25.650619 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:25.650985 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:25.651001 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:25.650926 1661492 retry.go:31] will retry after 587.352751ms: waiting for machine to come up I1206 11:51:26.239370 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:26.239650 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:26.239661 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:26.239612 1661492 retry.go:31] will retry after 834.206799ms: waiting for machine to come up I1206 11:51:27.075203 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:27.075523 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:27.075538 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:27.075499 1661492 retry.go:31] will retry after 746.553905ms: waiting for machine to come up I1206 11:51:27.823091 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:27.823322 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 
11:51:27.823333 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:27.823297 1661492 retry.go:31] will retry after 987.362415ms: waiting for machine to come up I1206 11:51:28.812026 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:28.812350 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:28.812381 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:28.812337 1661492 retry.go:31] will retry after 1.189835008s: waiting for machine to come up I1206 11:51:30.003178 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:30.003526 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:30.003558 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:30.003475 1661492 retry.go:31] will retry after 1.677229867s: waiting for machine to come up I1206 11:51:31.684672 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:31.685713 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:31.685744 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:31.685642 1661492 retry.go:31] will retry after 2.346016261s: waiting for machine to come up I1206 11:51:34.034445 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:34.035727 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:34.035769 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:34.035603 1661492 retry.go:31] will retry after 3.36678925s: waiting for machine to come up I1206 11:51:37.405877 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:37.407091 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:37.407149 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:37.406981 1661492 retry.go:31] will retry after 3.11822781s: waiting for machine to come up I1206 11:51:40.530964 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:40.531763 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:40.531806 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:40.531697 1661492 retry.go:31] will retry after 4.276119362s: waiting for machine to come up I1206 11:51:44.812082 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:44.813118 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:44.813159 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:44.813007 1661492 retry.go:31] will retry after 5.167232101s: waiting for machine to come up I1206 11:51:49.983241 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network 
mk-arkade I1206 11:51:49.983625 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:49.983645 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:49.983594 1661492 retry.go:31] will retry after 6.994901864s: waiting for machine to come up I1206 11:51:56.984695 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:51:56.985616 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:51:56.985657 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:51:56.985505 1661492 retry.go:31] will retry after 7.91826225s: waiting for machine to come up I1206 11:52:04.904987 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:52:04.905185 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:04.905196 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:04.905171 1661492 retry.go:31] will retry after 9.953714808s: waiting for machine to come up I1206 11:52:14.862816 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:52:14.863827 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:14.863868 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:14.863726 1661492 retry.go:31] will retry after 15.120437328s: waiting for machine to come up I1206 11:52:29.987668 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:bd:c6:6c in network mk-arkade I1206 11:52:29.988344 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:29.988999 1661470 main.go:128] libmachine: (arkade) KVM machine creation complete! 
I1206 11:52:29.989064 1661470 client.go:171] LocalClient.Create took 1m7.847613477s
I1206 11:52:31.989862 1661470 start.go:129] duration metric: createHost completed in 1m9.867938395s
I1206 11:52:31.989871 1661470 start.go:80] releasing machines lock for "arkade", held for 1m9.868020461s
W1206 11:52:31.989891 1661470 start.go:518] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine arkade didn't return IP after 1 minute
I1206 11:52:31.990143 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2
I1206 11:52:31.990164 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2
I1206 11:52:32.003449 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:43465
I1206 11:52:32.003792 1661470 main.go:128] libmachine: () Calling .GetVersion
I1206 11:52:32.004051 1661470 main.go:128] libmachine: Using API Version 1
I1206 11:52:32.004061 1661470 main.go:128] libmachine: () Calling .SetConfigRaw
I1206 11:52:32.004252 1661470 main.go:128] libmachine: () Calling .GetMachineName
I1206 11:52:32.004538 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2
I1206 11:52:32.004555 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2
I1206 11:52:32.016334 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41089
I1206 11:52:32.016659 1661470 main.go:128] libmachine: () Calling .GetVersion
I1206 11:52:32.016958 1661470 main.go:128] libmachine: Using API Version 1
I1206 11:52:32.016966 1661470 main.go:128] libmachine: () Calling .SetConfigRaw
I1206 11:52:32.017155 1661470 main.go:128] libmachine: () Calling .GetMachineName
I1206 11:52:32.017267 1661470 main.go:128] libmachine: (arkade) Calling .GetState
I1206 11:52:32.018173 1661470 main.go:128] libmachine: (arkade) Calling .DriverName
I1206 11:52:32.021241 1661470 out.go:170] 🔥 Deleting "arkade" in kvm2 ...
I1206 11:52:32.021272 1661470 main.go:128] libmachine: (arkade) Calling .Remove
I1206 11:52:32.021352 1661470 main.go:128] libmachine: (arkade) DBG | Removing machine...
I1206 11:52:32.021882 1661470 main.go:128] libmachine: (arkade) DBG | Trying to delete the networks (if possible)
I1206 11:52:32.022218 1661470 main.go:128] libmachine: (arkade) DBG | Checking if network mk-arkade exists...
I1206 11:52:32.022289 1661470 main.go:128] libmachine: (arkade) DBG | Network mk-arkade exists
I1206 11:52:32.022296 1661470 main.go:128] libmachine: (arkade) DBG | Trying to list all domains...
I1206 11:52:32.022373 1661470 main.go:128] libmachine: (arkade) DBG | Listed all domains: total of 1 domains
I1206 11:52:32.022381 1661470 main.go:128] libmachine: (arkade) DBG | Trying to get name of domain...
I1206 11:52:32.022390 1661470 main.go:128] libmachine: (arkade) DBG | Got domain name: arkade
I1206 11:52:32.022393 1661470 main.go:128] libmachine: (arkade) DBG | Skipping domain as it is us...
I1206 11:52:32.022399 1661470 main.go:128] libmachine: (arkade) DBG | Trying to delete network mk-arkade...
I1206 11:52:32.022448 1661470 main.go:128] libmachine: (arkade) DBG | Destroying active network mk-arkade
I1206 11:52:32.278682 1661470 main.go:128] libmachine: (arkade) DBG | Undefining inactive network mk-arkade
I1206 11:52:32.278961 1661470 main.go:128] libmachine: (arkade) DBG | Network mk-arkade deleted
I1206 11:52:32.278972 1661470 main.go:128] libmachine: (arkade) DBG | Checking if the domain needs to be deleted
I1206 11:52:32.278976 1661470 main.go:128] libmachine: (arkade) Successfully deleted networks
I1206 11:52:32.279119 1661470 main.go:128] libmachine: (arkade) Domain arkade exists, removing...
I1206 11:52:33.023351 1661470 main.go:128] libmachine: (arkade) Removing static IP address...
I1206 11:52:33.023363 1661470 main.go:128] libmachine: (arkade) Removed static IP address
I1206 11:52:33.023370 1661470 main.go:128] libmachine: (arkade) DBG | skip deleting static IP from network mk-arkade - couldn't find host DHCP lease matching {name: "", mac: "", ip: ""}
W1206 11:52:33.269082 1661470 out.go:235] 🤦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine arkade didn't return IP after 1 minute
I1206 11:52:33.269116 1661470 start.go:533] Will try again in 5 seconds ...
I1206 11:52:38.269311 1661470 start.go:313] acquiring machines lock for arkade: {Name:mk6fd3cb678c181e143bef0a872802b905379a4f Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I1206 11:52:38.269417 1661470 start.go:317] acquired machines lock for "arkade" in 82.173µs
I1206 11:52:38.269436 1661470 start.go:89] Provisioning new machine with config: &{Name:arkade KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:arkade Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:6443 NodeName:} Nodes:[{Name: IP: Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
I1206 11:52:38.269530 1661470 start.go:126] createHost starting for "" (driver="kvm2")
I1206 11:52:38.275482 1661470 out.go:197] 🔥 Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I1206 11:52:38.275619 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:52:38.275651 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:52:38.289844 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:37159 I1206 11:52:38.290140 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:52:38.290470 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:52:38.290480 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:52:38.290697 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:52:38.290792 1661470 main.go:128] libmachine: (arkade) Calling .GetMachineName I1206 11:52:38.290860 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:38.290947 1661470 start.go:160] libmachine.API.Create for "arkade" (driver="kvm2") I1206 11:52:38.290966 1661470 client.go:168] LocalClient.Create starting I1206 11:52:38.290980 1661470 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/ca.pem I1206 11:52:38.290999 1661470 main.go:128] libmachine: Decoding PEM data... I1206 11:52:38.291012 1661470 main.go:128] libmachine: Parsing certificate... I1206 11:52:38.291087 1661470 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/cert.pem I1206 11:52:38.291099 1661470 main.go:128] libmachine: Decoding PEM data... I1206 11:52:38.291106 1661470 main.go:128] libmachine: Parsing certificate... I1206 11:52:38.291136 1661470 main.go:128] libmachine: Running pre-create checks... I1206 11:52:38.291140 1661470 main.go:128] libmachine: (arkade) Calling .PreCreateCheck I1206 11:52:38.291219 1661470 main.go:128] libmachine: (arkade) Calling .GetConfigRaw I1206 11:52:38.291455 1661470 main.go:128] libmachine: Creating machine... I1206 11:52:38.291460 1661470 main.go:128] libmachine: (arkade) Calling .Create I1206 11:52:38.291528 1661470 main.go:128] libmachine: (arkade) Creating KVM machine... I1206 11:52:38.292182 1661470 main.go:128] libmachine: (arkade) DBG | found existing default KVM network I1206 11:52:38.293845 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.293745 1662479 network.go:215] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:81:94:51}} I1206 11:52:38.295230 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.295151 1662479 network.go:263] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0000be810] misses:0} I1206 11:52:38.295255 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.295177 1662479 network.go:210] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I1206 11:52:38.300723 1661470 main.go:128] libmachine: (arkade) DBG | trying to create private KVM network mk-arkade 192.168.50.0/24... I1206 11:52:38.381454 1661470 main.go:128] libmachine: (arkade) Setting up store path in /home/alex/.minikube/machines/arkade ... 
I1206 11:52:38.381466 1661470 main.go:128] libmachine: (arkade) DBG | private KVM network mk-arkade 192.168.50.0/24 created I1206 11:52:38.381471 1661470 main.go:128] libmachine: (arkade) Building disk image from file:///home/alex/.minikube/cache/iso/minikube-v1.21.0.iso I1206 11:52:38.381481 1661470 main.go:128] libmachine: (arkade) Downloading /home/alex/.minikube/cache/boot2docker.iso from file:///home/alex/.minikube/cache/iso/minikube-v1.21.0.iso... I1206 11:52:38.381488 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.381352 1662479 common.go:101] Making disk image using store path: /home/alex/.minikube I1206 11:52:38.510439 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.510362 1662479 common.go:108] Creating ssh key: /home/alex/.minikube/machines/arkade/id_rsa... I1206 11:52:38.728279 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.728199 1662479 common.go:114] Creating raw disk image: /home/alex/.minikube/machines/arkade/arkade.rawdisk... I1206 11:52:38.728315 1661470 main.go:128] libmachine: (arkade) DBG | Writing magic tar header I1206 11:52:38.728334 1661470 main.go:128] libmachine: (arkade) DBG | Writing SSH key tar header I1206 11:52:38.728396 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:38.728329 1662479 common.go:128] Fixing permissions on /home/alex/.minikube/machines/arkade ... I1206 11:52:38.728442 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube/machines/arkade I1206 11:52:38.728455 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube/machines/arkade (perms=drwx------) I1206 11:52:38.728464 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube/machines I1206 11:52:38.728475 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex/.minikube I1206 11:52:38.728488 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube/machines (perms=drwxr-xr-x) I1206 11:52:38.728497 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex/.minikube (perms=drwxr-xr-x) I1206 11:52:38.728507 1661470 main.go:128] libmachine: (arkade) Setting executable bit set on /home/alex (perms=drwxr-xr-x) I1206 11:52:38.728511 1661470 main.go:128] libmachine: (arkade) Creating domain... I1206 11:52:38.728522 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home/alex I1206 11:52:38.728528 1661470 main.go:128] libmachine: (arkade) DBG | Checking permissions on dir: /home I1206 11:52:38.728537 1661470 main.go:128] libmachine: (arkade) DBG | Skipping /home - not owner I1206 11:52:38.739281 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:92:cc:3c in network default I1206 11:52:38.739709 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:38.739721 1661470 main.go:128] libmachine: (arkade) Ensuring networks are active... I1206 11:52:38.740178 1661470 main.go:128] libmachine: (arkade) Ensuring network default is active I1206 11:52:38.740390 1661470 main.go:128] libmachine: (arkade) Ensuring network mk-arkade is active I1206 11:52:38.740701 1661470 main.go:128] libmachine: (arkade) Getting domain xml... I1206 11:52:38.741120 1661470 main.go:128] libmachine: (arkade) Creating domain... I1206 11:52:40.020852 1661470 main.go:128] libmachine: (arkade) Waiting to get IP... 
I1206 11:52:40.021445 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:40.021817 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:40.021842 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:40.021787 1662479 retry.go:31] will retry after 263.082536ms: waiting for machine to come up I1206 11:52:40.285790 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:40.286023 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:40.286040 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:40.285992 1662479 retry.go:31] will retry after 381.329545ms: waiting for machine to come up I1206 11:52:40.668914 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:40.669157 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:40.669169 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:40.669140 1662479 retry.go:31] will retry after 422.765636ms: waiting for machine to come up I1206 11:52:41.093546 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:41.093871 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:41.093888 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:41.093843 1662479 retry.go:31] will retry after 473.074753ms: waiting for machine to come up I1206 11:52:41.568239 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:41.568494 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:41.568505 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:41.568463 1662479 retry.go:31] will retry after 587.352751ms: waiting for machine to come up I1206 11:52:42.156900 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:42.157240 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:42.157254 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:42.157223 1662479 retry.go:31] will retry after 834.206799ms: waiting for machine to come up I1206 11:52:42.992967 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:42.993238 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:42.993272 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:42.993225 1662479 retry.go:31] will retry after 746.553905ms: waiting for machine to come up I1206 11:52:43.741273 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:43.741521 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 
11:52:43.741532 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:43.741504 1662479 retry.go:31] will retry after 987.362415ms: waiting for machine to come up I1206 11:52:44.730278 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:44.730518 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:44.730533 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:44.730490 1662479 retry.go:31] will retry after 1.189835008s: waiting for machine to come up I1206 11:52:45.921477 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:45.921710 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:45.921721 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:45.921676 1662479 retry.go:31] will retry after 1.677229867s: waiting for machine to come up I1206 11:52:47.602023 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:47.603006 1661470 main.go:128] libmachine: (arkade) DBG | unable to find current IP address of domain arkade in network mk-arkade I1206 11:52:47.603044 1661470 main.go:128] libmachine: (arkade) DBG | I1206 11:52:47.602935 1662479 retry.go:31] will retry after 2.346016261s: waiting for machine to come up I1206 11:52:49.952428 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:49.953547 1661470 main.go:128] libmachine: (arkade) Found IP for machine: 192.168.50.52 I1206 11:52:49.953595 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has current primary IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:49.953612 1661470 main.go:128] libmachine: (arkade) Reserving static IP address... I1206 11:52:49.954610 1661470 main.go:128] libmachine: (arkade) DBG | unable to find host DHCP lease matching {name: "arkade", mac: "52:54:00:5a:72:5c", ip: "192.168.50.52"} in network mk-arkade I1206 11:52:50.127799 1661470 main.go:128] libmachine: (arkade) DBG | Getting to WaitForSSH function... I1206 11:52:50.127839 1661470 main.go:128] libmachine: (arkade) Reserved static IP address: 192.168.50.52 I1206 11:52:50.127925 1661470 main.go:128] libmachine: (arkade) Waiting for SSH to be available... 
I1206 11:52:50.133007 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.133815 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.133881 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.134244 1661470 main.go:128] libmachine: (arkade) DBG | Using SSH client type: external I1206 11:52:50.134271 1661470 main.go:128] libmachine: (arkade) DBG | Using SSH private key: /home/alex/.minikube/machines/arkade/id_rsa (-rw-------) I1206 11:52:50.134324 1661470 main.go:128] libmachine: (arkade) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.52 -o IdentitiesOnly=yes -i /home/alex/.minikube/machines/arkade/id_rsa -p 22] /usr/bin/ssh } I1206 11:52:50.134418 1661470 main.go:128] libmachine: (arkade) DBG | About to run SSH command: I1206 11:52:50.134438 1661470 main.go:128] libmachine: (arkade) DBG | exit 0 I1206 11:52:50.283249 1661470 main.go:128] libmachine: (arkade) DBG | SSH cmd err, output: : I1206 11:52:50.284472 1661470 main.go:128] libmachine: (arkade) KVM machine creation complete! I1206 11:52:50.284544 1661470 main.go:128] libmachine: (arkade) Calling .GetConfigRaw I1206 11:52:50.285662 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:50.286084 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:50.286488 1661470 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes... I1206 11:52:50.286509 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:52:50.290513 1661470 main.go:128] libmachine: Detecting operating system of created instance... I1206 11:52:50.290539 1661470 main.go:128] libmachine: Waiting for SSH to be available... I1206 11:52:50.290560 1661470 main.go:128] libmachine: Getting to WaitForSSH function... 
I1206 11:52:50.290584 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:50.296224 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.297059 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.297115 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.297465 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:50.297804 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.298159 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.298575 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:50.298984 1661470 main.go:128] libmachine: Using SSH client type: native I1206 11:52:50.299341 1661470 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 192.168.50.52 22 } I1206 11:52:50.299358 1661470 main.go:128] libmachine: About to run SSH command: exit 0 I1206 11:52:50.462067 1661470 main.go:128] libmachine: SSH cmd err, output: : I1206 11:52:50.462100 1661470 main.go:128] libmachine: Detecting the provisioner... I1206 11:52:50.462117 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:50.469241 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.470403 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.470452 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.470816 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:50.471250 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.471640 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.471970 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:50.472333 1661470 main.go:128] libmachine: Using SSH client type: native I1206 11:52:50.472703 1661470 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 192.168.50.52 22 } I1206 11:52:50.472723 1661470 main.go:128] libmachine: About to run SSH command: cat /etc/os-release I1206 11:52:50.621204 1661470 main.go:128] libmachine: SSH cmd err, output: : NAME=Buildroot VERSION=2020.02.12 ID=buildroot VERSION_ID=2020.02.12 PRETTY_NAME="Buildroot 2020.02.12" I1206 11:52:50.621288 1661470 main.go:128] libmachine: found compatible host: buildroot I1206 11:52:50.621299 1661470 main.go:128] libmachine: Provisioning with buildroot... 
I1206 11:52:50.621311 1661470 main.go:128] libmachine: (arkade) Calling .GetMachineName I1206 11:52:50.621625 1661470 buildroot.go:166] provisioning hostname "arkade" I1206 11:52:50.621643 1661470 main.go:128] libmachine: (arkade) Calling .GetMachineName I1206 11:52:50.621994 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:50.627078 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.627826 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.627860 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.628217 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:50.628484 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.628808 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.629042 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:50.629346 1661470 main.go:128] libmachine: Using SSH client type: native I1206 11:52:50.629618 1661470 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 192.168.50.52 22 } I1206 11:52:50.629636 1661470 main.go:128] libmachine: About to run SSH command: sudo hostname arkade && echo "arkade" | sudo tee /etc/hostname I1206 11:52:50.779118 1661470 main.go:128] libmachine: SSH cmd err, output: : arkade I1206 11:52:50.779151 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:50.787818 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.788854 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.788924 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.789386 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:50.789774 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.790151 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:50.790515 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:50.790914 1661470 main.go:128] libmachine: Using SSH client type: native I1206 11:52:50.791347 1661470 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 192.168.50.52 22 } I1206 11:52:50.791417 1661470 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\sarkade' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 arkade/g' /etc/hosts; else echo '127.0.1.1 arkade' | sudo tee -a /etc/hosts; fi fi I1206 11:52:50.932849 1661470 main.go:128] libmachine: SSH cmd err, output: : I1206 11:52:50.932907 1661470 buildroot.go:172] set auth options {CertDir:/home/alex/.minikube CaCertPath:/home/alex/.minikube/certs/ca.pem CaPrivateKeyPath:/home/alex/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/alex/.minikube/machines/server.pem ServerKeyPath:/home/alex/.minikube/machines/server-key.pem ClientKeyPath:/home/alex/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/alex/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/alex/.minikube} I1206 11:52:50.932995 1661470 buildroot.go:174] setting up certificates I1206 11:52:50.933016 1661470 provision.go:83] configureAuth start I1206 11:52:50.933038 1661470 main.go:128] libmachine: (arkade) Calling .GetMachineName I1206 11:52:50.933541 1661470 main.go:128] libmachine: (arkade) Calling .GetIP I1206 11:52:50.941718 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.942863 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.942934 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.943430 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:50.949932 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.950619 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:50.950659 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:50.951087 1661470 provision.go:137] copyHostCerts I1206 11:52:50.951197 1661470 exec_runner.go:145] found /home/alex/.minikube/ca.pem, removing ... I1206 11:52:50.951210 1661470 exec_runner.go:190] rm: /home/alex/.minikube/ca.pem I1206 11:52:50.951338 1661470 exec_runner.go:152] cp: /home/alex/.minikube/certs/ca.pem --> /home/alex/.minikube/ca.pem (1029 bytes) I1206 11:52:50.951582 1661470 exec_runner.go:145] found /home/alex/.minikube/cert.pem, removing ... I1206 11:52:50.951595 1661470 exec_runner.go:190] rm: /home/alex/.minikube/cert.pem I1206 11:52:50.951674 1661470 exec_runner.go:152] cp: /home/alex/.minikube/certs/cert.pem --> /home/alex/.minikube/cert.pem (1070 bytes) I1206 11:52:50.951832 1661470 exec_runner.go:145] found /home/alex/.minikube/key.pem, removing ... 
I1206 11:52:50.951847 1661470 exec_runner.go:190] rm: /home/alex/.minikube/key.pem I1206 11:52:50.951981 1661470 exec_runner.go:152] cp: /home/alex/.minikube/certs/key.pem --> /home/alex/.minikube/key.pem (1679 bytes) I1206 11:52:50.952129 1661470 provision.go:111] generating server cert: /home/alex/.minikube/machines/server.pem ca-key=/home/alex/.minikube/certs/ca.pem private-key=/home/alex/.minikube/certs/ca-key.pem org=alex.arkade san=[192.168.50.52 192.168.50.52 localhost 127.0.0.1 minikube arkade] I1206 11:52:51.193981 1661470 provision.go:171] copyRemoteCerts I1206 11:52:51.194013 1661470 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I1206 11:52:51.194025 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:51.195815 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.195972 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.195989 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.196055 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:51.196139 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:51.196210 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:51.196263 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:52:51.284305 1661470 ssh_runner.go:316] scp /home/alex/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes) I1206 11:52:51.323630 1661470 ssh_runner.go:316] scp /home/alex/.minikube/machines/server.pem --> /etc/docker/server.pem (1143 bytes) I1206 11:52:51.356281 1661470 ssh_runner.go:316] scp /home/alex/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I1206 11:52:51.378038 1661470 provision.go:86] duration metric: configureAuth took 445.009317ms I1206 11:52:51.378055 1661470 buildroot.go:189] setting minikube options for container-runtime I1206 11:52:51.378210 1661470 main.go:128] libmachine: Checking connection to Docker... I1206 11:52:51.378219 1661470 main.go:128] libmachine: (arkade) Calling .GetURL I1206 11:52:51.379254 1661470 main.go:128] libmachine: (arkade) DBG | Using libvirt version 6000000 I1206 11:52:51.381149 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.381452 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.381464 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.381596 1661470 main.go:128] libmachine: Docker is up and running! I1206 11:52:51.381603 1661470 main.go:128] libmachine: Reticulating splines... 
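As a side note, the certificates provisioned in the lines above can be sanity-checked from the host with openssl; this is only an illustrative sketch using the server.pem path from the log (output formatting varies by openssl version):

  # List the SANs baked into the generated server certificate; they should match the san=[...] list logged above
  openssl x509 -noout -text -in /home/alex/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'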
I1206 11:52:51.381606 1661470 client.go:171] LocalClient.Create took 13.090636591s I1206 11:52:51.381623 1661470 start.go:168] duration metric: libmachine.API.Create for "arkade" took 13.090670982s I1206 11:52:51.381628 1661470 start.go:267] post-start starting for "arkade" (driver="kvm2") I1206 11:52:51.381631 1661470 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I1206 11:52:51.381642 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:51.381784 1661470 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I1206 11:52:51.381797 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:51.383711 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.383979 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.383992 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.384069 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:51.384171 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:51.384252 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:51.384333 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:52:51.474036 1661470 ssh_runner.go:149] Run: cat /etc/os-release I1206 11:52:51.483393 1661470 info.go:137] Remote host: Buildroot 2020.02.12 I1206 11:52:51.483416 1661470 filesync.go:126] Scanning /home/alex/.minikube/addons for local assets ... I1206 11:52:51.483519 1661470 filesync.go:126] Scanning /home/alex/.minikube/files for local assets ... I1206 11:52:51.483572 1661470 start.go:270] post-start completed in 101.935951ms I1206 11:52:51.483622 1661470 main.go:128] libmachine: (arkade) Calling .GetConfigRaw I1206 11:52:51.484805 1661470 main.go:128] libmachine: (arkade) Calling .GetIP I1206 11:52:51.491161 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.491918 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.491958 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.492513 1661470 profile.go:148] Saving config to /home/alex/.minikube/profiles/arkade/config.json ... 
I1206 11:52:51.492886 1661470 start.go:129] duration metric: createHost completed in 13.223342648s I1206 11:52:51.492901 1661470 start.go:80] releasing machines lock for "arkade", held for 13.223473029s I1206 11:52:51.492963 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:51.493383 1661470 main.go:128] libmachine: (arkade) Calling .GetIP I1206 11:52:51.500530 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.501465 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.501524 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.505274 1661470 out.go:170] 🌐 Found network options: I1206 11:52:51.508452 1661470 out.go:170] ▪ NO_PROXY=localhost,127.0.0.0/8,::1 W1206 11:52:51.508541 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.508592 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.508612 1661470 proxy.go:118] fail to check proxy env: Error ip not in block I1206 11:52:51.511554 1661470 out.go:170] ▪ no_proxy=localhost,127.0.0.0/8,::1 W1206 11:52:51.511652 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.511685 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.511710 1661470 proxy.go:118] fail to check proxy env: Error ip not in block I1206 11:52:51.511744 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:51.512339 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:52:51.513647 1661470 main.go:128] libmachine: (arkade) Calling .DriverName W1206 11:52:51.514305 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.514413 1661470 proxy.go:118] fail to check proxy env: Error ip not in block W1206 11:52:51.514451 1661470 proxy.go:118] fail to check proxy env: Error ip not in block I1206 11:52:51.514513 1661470 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I1206 11:52:51.514635 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:51.514642 1661470 ssh_runner.go:149] Run: systemctl --version I1206 11:52:51.514732 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:52:51.523134 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.523167 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.523926 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.523966 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.524164 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP 
lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:52:51.524291 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:51.524620 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:51.524843 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:52:51.524918 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:51.525144 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:52:51.525305 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:52:51.525545 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:52:51.525790 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:52:51.526135 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:52:51.623803 1661470 preload.go:110] Checking if preload exists for k8s version v1.21.2 and runtime containerd I1206 11:52:51.623978 1661470 ssh_runner.go:149] Run: sudo crictl images --output json I1206 11:52:55.658801 1661470 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.034796372s) I1206 11:52:55.658945 1661470 containerd.go:573] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.2". assuming images are not preloaded. I1206 11:52:55.659016 1661470 ssh_runner.go:149] Run: which lz4 I1206 11:52:55.662890 1661470 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4 I1206 11:52:55.666668 1661470 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1 stdout: stderr: stat: cannot stat '/preloaded.tar.lz4': No such file or directory I1206 11:52:55.666686 1661470 ssh_runner.go:316] scp /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (967256487 bytes) I1206 11:52:57.462664 1661470 containerd.go:510] Took 1.799823 seconds to copy over tarball I1206 11:52:57.462721 1661470 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 I1206 11:53:00.705642 1661470 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.242878357s) I1206 11:53:00.705651 1661470 containerd.go:517] Took 3.242975 seconds t extract the tarball I1206 11:53:00.705656 1661470 ssh_runner.go:100] rm: /preloaded.tar.lz4 I1206 11:53:00.757068 1661470 ssh_runner.go:149] Run: sudo systemctl daemon-reload I1206 11:53:00.851865 1661470 ssh_runner.go:149] Run: sudo systemctl restart containerd I1206 11:53:00.885732 1661470 ssh_runner.go:149] Run: sudo systemctl stop -f crio I1206 11:53:00.925269 1661470 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I1206 11:53:00.933164 1661470 docker.go:153] disabling docker service ... 
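For reference, the preload handling logged above boils down to a handful of commands inside the guest; they are reproduced here purely as an illustration, taken from the Run: lines, and would be executed over `minikube ssh -p arkade`:

  stat -c "%s %y" /preloaded.tar.lz4              # existence/size check (fails on a fresh VM, so the tarball gets copied over)
  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  # unpack the cached images under /var
  sudo systemctl restart containerd               # restart the runtime to pick up the extracted image store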
I1206 11:53:00.933200 1661470 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I1206 11:53:00.941922 1661470 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service E1206 11:53:00.952183 1661470 docker.go:159] "Failed to stop" err="sudo systemctl stop -f docker.service: Process exited with status 5\nstdout:\n\nstderr:\nFailed to stop docker.service: Unit docker.service not loaded.\n" service="docker.service" W1206 11:53:00.952200 1661470 cruntime.go:236] disable failed: sudo systemctl stop -f docker.service: Process exited with status 5 stdout: stderr: Failed to stop docker.service: Unit docker.service not loaded. I1206 11:53:00.952239 1661470 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I1206 11:53:00.964165 1661470 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I1206 11:53:00.973786 1661470 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjQuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIHN5c3RlbWRfY2dyb3VwID0gZmFsc2UKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBub19waXZvdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bnRpbWUudjEubGludXgiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5saW51eF0KICAgIHNoaW0gPSAiY29udGFpbmVyZC1zaGltIgogICAgcnVudGltZSA9ICJydW5jIgogICAgcnVudGltZV9yb290ID0gIiIKICAgIG5vX3NoaW0gPSBmYWxzZQogICAgc2hpbV9kZWJ1ZyA9IGZhbHNlCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml" I1206 11:53:00.983192 1661470 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I1206 11:53:00.987342 
1661470 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255 stdout: stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory I1206 11:53:00.987370 1661470 ssh_runner.go:149] Run: sudo modprobe br_netfilter I1206 11:53:00.993940 1661470 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I1206 11:53:00.998372 1661470 ssh_runner.go:149] Run: sudo systemctl daemon-reload I1206 11:53:01.107399 1661470 ssh_runner.go:149] Run: sudo systemctl restart containerd I1206 11:53:02.103599 1661470 start.go:381] Will wait 60s for socket path /run/containerd/containerd.sock I1206 11:53:02.103649 1661470 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I1206 11:53:02.107477 1661470 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1 stdout: stderr: stat: cannot stat '/run/containerd/containerd.sock': No such file or directory I1206 11:53:03.212455 1661470 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I1206 11:53:03.215610 1661470 start.go:406] Will wait 60s for crictl version I1206 11:53:03.215639 1661470 ssh_runner.go:149] Run: sudo crictl version I1206 11:53:03.223799 1661470 start.go:415] Version: 0.1.0 RuntimeName: containerd RuntimeVersion: v1.4.4 RuntimeApiVersion: v1alpha2 I1206 11:53:03.223845 1661470 ssh_runner.go:149] Run: containerd --version I1206 11:53:03.243354 1661470 out.go:170] 📦 Preparing Kubernetes v1.21.2 on containerd 1.4.4 ... I1206 11:53:03.246083 1661470 out.go:170] ▪ env NO_PROXY=localhost,127.0.0.0/8,::1 I1206 11:53:03.246136 1661470 main.go:128] libmachine: (arkade) Calling .GetIP I1206 11:53:03.247883 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:03.248048 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:53:03.248063 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:03.248199 1661470 ssh_runner.go:149] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts I1206 11:53:03.250921 1661470 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I1206 11:53:03.256922 1661470 preload.go:110] Checking if preload exists for k8s version v1.21.2 and runtime containerd I1206 11:53:03.256963 1661470 ssh_runner.go:149] Run: sudo crictl images --output json I1206 11:53:03.268587 1661470 containerd.go:577] all images are preloaded for containerd runtime. I1206 11:53:03.268594 1661470 containerd.go:481] Images already preloaded, skipping extraction I1206 11:53:03.268624 1661470 ssh_runner.go:149] Run: sudo crictl images --output json I1206 11:53:03.278496 1661470 containerd.go:577] all images are preloaded for containerd runtime. 
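The containerd configuration written a few lines up only appears in the log as a base64 blob piped through `base64 -d | sudo tee /etc/containerd/config.toml`. To read it, the blob can be decoded locally; config.b64 below is a hypothetical file holding the pasted blob:

  base64 -d config.b64 > config.toml
  grep sandbox_image config.toml    # should show the k8s.gcr.io/pause:3.4.1 sandbox image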
I1206 11:53:03.278502 1661470 cache_images.go:74] Images are preloaded, skipping loading I1206 11:53:03.278535 1661470 ssh_runner.go:149] Run: sudo crictl info I1206 11:53:03.288586 1661470 cni.go:93] Creating CNI manager for "" I1206 11:53:03.288595 1661470 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge I1206 11:53:03.288599 1661470 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I1206 11:53:03.288606 1661470 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.52 APIServerPort:6443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:arkade NodeName:arkade DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I1206 11:53:03.288725 1661470 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.50.52 bindPort: 6443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /run/containerd/containerd.sock name: "arkade" kubeletExtraArgs: node-ip: 192.168.50.52 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.50.52"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:6443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.21.2 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 I1206 11:53:03.288776 1661470 kubeadm.go:909] kubelet [Unit] Wants=containerd.service [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet 
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=arkade --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.52 --runtime-request-timeout=15m [Install] config: {KubernetesVersion:v1.21.2 ClusterName:arkade Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:6443 NodeName:} I1206 11:53:03.288814 1661470 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2 I1206 11:53:03.293287 1661470 binaries.go:44] Found k8s binaries, skipping transfer I1206 11:53:03.293320 1661470 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I1206 11:53:03.297623 1661470 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes) I1206 11:53:03.305014 1661470 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I1206 11:53:03.312315 1661470 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1875 bytes) I1206 11:53:03.318941 1661470 ssh_runner.go:149] Run: grep 192.168.50.52 control-plane.minikube.internal$ /etc/hosts I1206 11:53:03.321202 1661470 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.52 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I1206 11:53:03.326817 1661470 certs.go:52] Setting up /home/alex/.minikube/profiles/arkade for IP: 192.168.50.52 I1206 11:53:03.326878 1661470 certs.go:179] skipping minikubeCA CA generation: /home/alex/.minikube/ca.key I1206 11:53:03.326888 1661470 certs.go:179] skipping proxyClientCA CA generation: /home/alex/.minikube/proxy-client-ca.key I1206 11:53:03.326940 1661470 certs.go:294] generating minikube-user signed cert: /home/alex/.minikube/profiles/arkade/client.key I1206 11:53:03.326942 1661470 crypto.go:69] Generating cert /home/alex/.minikube/profiles/arkade/client.crt with IP's: [] I1206 11:53:03.542552 1661470 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/arkade/client.crt ... I1206 11:53:03.542561 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/client.crt: {Name:mk7f8b6b827fe348090648ab448eb6f4304431cf Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.542728 1661470 crypto.go:165] Writing key to /home/alex/.minikube/profiles/arkade/client.key ... I1206 11:53:03.542732 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/client.key: {Name:mkd8c741c1d303c7b986bb62ce78ce871a0abf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.542796 1661470 certs.go:294] generating minikube signed cert: /home/alex/.minikube/profiles/arkade/apiserver.key.a1dd3d55 I1206 11:53:03.542799 1661470 crypto.go:69] Generating cert /home/alex/.minikube/profiles/arkade/apiserver.crt.a1dd3d55 with IP's: [192.168.50.52 10.96.0.1 127.0.0.1 10.0.0.1] I1206 11:53:03.838373 1661470 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/arkade/apiserver.crt.a1dd3d55 ... 
I1206 11:53:03.838383 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/apiserver.crt.a1dd3d55: {Name:mk0e519da40172b75a078ad51939a6d39184862f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.838545 1661470 crypto.go:165] Writing key to /home/alex/.minikube/profiles/arkade/apiserver.key.a1dd3d55 ... I1206 11:53:03.838549 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/apiserver.key.a1dd3d55: {Name:mk22612c8bdf5f945fc66d0c4b0c9db173b3dffc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.838605 1661470 certs.go:305] copying /home/alex/.minikube/profiles/arkade/apiserver.crt.a1dd3d55 -> /home/alex/.minikube/profiles/arkade/apiserver.crt I1206 11:53:03.838642 1661470 certs.go:309] copying /home/alex/.minikube/profiles/arkade/apiserver.key.a1dd3d55 -> /home/alex/.minikube/profiles/arkade/apiserver.key I1206 11:53:03.838673 1661470 certs.go:294] generating aggregator signed cert: /home/alex/.minikube/profiles/arkade/proxy-client.key I1206 11:53:03.838675 1661470 crypto.go:69] Generating cert /home/alex/.minikube/profiles/arkade/proxy-client.crt with IP's: [] I1206 11:53:03.973515 1661470 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/arkade/proxy-client.crt ... I1206 11:53:03.973523 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/proxy-client.crt: {Name:mk1e5dfadd9026d794e3dea9dafd7c50a2a0de1b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.973693 1661470 crypto.go:165] Writing key to /home/alex/.minikube/profiles/arkade/proxy-client.key ... I1206 11:53:03.973697 1661470 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/arkade/proxy-client.key: {Name:mk18dcb2226485fdaec18fb690a894e34cd936c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:03.973839 1661470 certs.go:369] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca-key.pem (1679 bytes) I1206 11:53:03.973874 1661470 certs.go:369] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca.pem (1029 bytes) I1206 11:53:03.973890 1661470 certs.go:369] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/cert.pem (1070 bytes) I1206 11:53:03.973903 1661470 certs.go:369] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/key.pem (1679 bytes) I1206 11:53:03.974555 1661470 ssh_runner.go:316] scp /home/alex/.minikube/profiles/arkade/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I1206 11:53:03.985251 1661470 ssh_runner.go:316] scp /home/alex/.minikube/profiles/arkade/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I1206 11:53:03.994594 1661470 ssh_runner.go:316] scp /home/alex/.minikube/profiles/arkade/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I1206 11:53:04.004339 1661470 ssh_runner.go:316] scp /home/alex/.minikube/profiles/arkade/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I1206 11:53:04.013829 1661470 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I1206 11:53:04.023541 1661470 ssh_runner.go:316] scp /home/alex/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I1206 11:53:04.033038 1661470 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I1206 11:53:04.042030 1661470 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I1206 11:53:04.050986 
1661470 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I1206 11:53:04.060381 1661470 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I1206 11:53:04.067256 1661470 ssh_runner.go:149] Run: openssl version I1206 11:53:04.070823 1661470 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I1206 11:53:04.075165 1661470 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I1206 11:53:04.077725 1661470 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Nov 17 10:58 /usr/share/ca-certificates/minikubeCA.pem I1206 11:53:04.077749 1661470 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I1206 11:53:04.081185 1661470 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I1206 11:53:04.085274 1661470 kubeadm.go:390] StartCluster: {Name:arkade KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:arkade Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:6443 NodeName:} Nodes:[{Name: IP:192.168.50.52 Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I1206 11:53:04.085326 1661470 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]} I1206 11:53:04.085356 1661470 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" I1206 11:53:04.095557 1661470 cri.go:76] found id: "" I1206 11:53:04.095589 1661470 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I1206 11:53:04.100310 1661470 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I1206 11:53:04.105443 1661470 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I1206 11:53:04.110844 1661470 kubeadm.go:151] 
config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I1206 11:53:04.110865 1661470 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I1206 11:53:04.301132 1661470 out.go:197] ▪ Generating certificates and keys ... I1206 11:53:06.705806 1661470 out.go:197] ▪ Booting up control plane ... I1206 11:53:20.262998 1661470 out.go:197] ▪ Configuring RBAC rules ... I1206 11:53:20.695449 1661470 cni.go:93] Creating CNI manager for "" I1206 11:53:20.695465 1661470 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge I1206 11:53:20.697782 1661470 out.go:170] 🔗 Configuring bridge CNI (Container Networking Interface) ... I1206 11:53:20.697915 1661470 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d I1206 11:53:20.706604 1661470 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I1206 11:53:20.726092 1661470 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I1206 11:53:20.726120 1661470 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I1206 11:53:20.726134 1661470 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.21.0 minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11 minikube.k8s.io/name=arkade minikube.k8s.io/updated_at=2021_12_06T11_53_20_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I1206 11:53:20.738481 1661470 ops.go:34] apiserver oom_adj: -16 I1206 11:53:20.841132 1661470 kubeadm.go:985] duration metric: took 115.041745ms to wait for elevateKubeSystemPrivileges. 
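At this point kubeadm init has completed, the bridge CNI conflist has been copied in, and the minikube-rbac binding has been created; both can be inspected afterwards, for example (profile and context names are the ones appearing in the log):

  minikube ssh -p arkade                   # shell into the VM
  sudo cat /etc/cni/net.d/1-k8s.conflist   # the bridge conflist scp'd above

  kubectl --context arkade get clusterrolebinding minikube-rbac   # created by the kubectl call above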
I1206 11:53:20.841168 1661470 kubeadm.go:392] StartCluster complete in 16.755896007s I1206 11:53:20.841178 1661470 settings.go:142] acquiring lock: {Name:mk627fa28a1976656e27a48af7f606caf0283542 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:20.841229 1661470 settings.go:150] Updating kubeconfig: /home/alex/.kube/config I1206 11:53:20.846102 1661470 lock.go:36] WriteFile acquiring /home/alex/.kube/config: {Name:mka8437642e3e79f288f89b7a0971396de857b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1206 11:53:21.380963 1661470 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "arkade" rescaled to 1 I1206 11:53:21.381071 1661470 start.go:214] Will wait 6m0s for node &{Name: IP:192.168.50.52 Port:6443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true} I1206 11:53:21.384179 1661470 out.go:170] 🔎 Verifying Kubernetes components... I1206 11:53:21.381318 1661470 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I1206 11:53:21.384303 1661470 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I1206 11:53:21.381307 1661470 addons.go:342] enableAddons start: toEnable=map[], additional=[volumesnapshots csi-hostpath-driver] I1206 11:53:21.384474 1661470 addons.go:59] Setting volumesnapshots=true in profile "arkade" I1206 11:53:21.384475 1661470 addons.go:59] Setting csi-hostpath-driver=true in profile "arkade" I1206 11:53:21.384490 1661470 addons.go:59] Setting default-storageclass=true in profile "arkade" I1206 11:53:21.384511 1661470 addons.go:135] Setting addon volumesnapshots=true in "arkade" I1206 11:53:21.384508 1661470 addons.go:59] Setting storage-provisioner=true in profile "arkade" I1206 11:53:21.384532 1661470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "arkade" I1206 11:53:21.384551 1661470 addons.go:135] Setting addon storage-provisioner=true in "arkade" W1206 11:53:21.384569 1661470 addons.go:147] addon storage-provisioner should already be in state true I1206 11:53:21.384568 1661470 addons.go:135] Setting addon csi-hostpath-driver=true in "arkade" I1206 11:53:21.384574 1661470 host.go:66] Checking if "arkade" exists ... I1206 11:53:21.384630 1661470 host.go:66] Checking if "arkade" exists ... I1206 11:53:21.384637 1661470 host.go:66] Checking if "arkade" exists ... 
I1206 11:53:21.385678 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.385764 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.385839 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.385915 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.385954 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.386036 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.386188 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.386275 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.448030 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34329 I1206 11:53:21.449485 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.450131 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:35939 I1206 11:53:21.451565 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.451708 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.451741 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.452417 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.452613 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.452651 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.453678 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.453735 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.455625 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:46751 I1206 11:53:21.455660 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.456500 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.456678 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:53:21.457479 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.457508 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.457636 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41485 I1206 11:53:21.458603 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.458715 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.460208 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.460268 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.460710 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.460731 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.461256 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.462214 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.462258 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.492144 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:39813 I1206 11:53:21.493757 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.494247 1661470 main.go:128] 
libmachine: Using API Version 1 I1206 11:53:21.494261 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.494329 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:35163 I1206 11:53:21.494809 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.494888 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.495060 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:53:21.495507 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.495522 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.495873 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.496063 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:53:21.496361 1661470 addons.go:135] Setting addon default-storageclass=true in "arkade" W1206 11:53:21.496370 1661470 addons.go:147] addon default-storageclass should already be in state true I1206 11:53:21.496394 1661470 host.go:66] Checking if "arkade" exists ... I1206 11:53:21.496809 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.496848 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.497450 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:53:21.503772 1661470 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I1206 11:53:21.497724 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:53:21.503882 1661470 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml I1206 11:53:21.503890 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I1206 11:53:21.498077 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38547 I1206 11:53:21.503907 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:53:21.510104 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0 I1206 11:53:21.510170 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml I1206 11:53:21.510178 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes) I1206 11:53:21.510196 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:53:21.504609 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.506317 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.510363 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:53:21.510388 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.506832 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:53:21.510665 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:53:21.510773 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:53:21.510870 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa 
Username:docker} I1206 11:53:21.513734 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.514242 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.514254 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.514421 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:53:21.514596 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:53:21.514708 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:53:21.514949 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:53:21.514965 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.515357 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.515431 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:53:21.516201 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:53:21.518656 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:53:21.522799 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0 I1206 11:53:21.526522 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 I1206 11:53:21.529528 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0 I1206 11:53:21.527695 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:44237 I1206 11:53:21.532329 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 I1206 11:53:21.530065 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.535149 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0 I1206 11:53:21.532996 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.535204 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.538037 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0 I1206 11:53:21.537654 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.541546 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 I1206 11:53:21.538618 1661470 main.go:128] libmachine: Found binary path at /home/alex/.minikube/bin/docker-machine-driver-kvm2 I1206 11:53:21.541608 1661470 main.go:128] libmachine: Launching plugin server for driver kvm2 I1206 11:53:21.547058 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0 I1206 11:53:21.550555 1661470 out.go:170] ▪ Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 I1206 11:53:21.550659 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml I1206 11:53:21.550668 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes) I1206 11:53:21.550686 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:53:21.554581 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:53:21.554581 1661470 main.go:128] 
libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.554604 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:53:21.554636 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.554807 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:53:21.554913 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:53:21.555014 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:53:21.569824 1661470 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45711 I1206 11:53:21.570254 1661470 main.go:128] libmachine: () Calling .GetVersion I1206 11:53:21.570726 1661470 main.go:128] libmachine: Using API Version 1 I1206 11:53:21.570738 1661470 main.go:128] libmachine: () Calling .SetConfigRaw I1206 11:53:21.571286 1661470 main.go:128] libmachine: () Calling .GetMachineName I1206 11:53:21.571413 1661470 main.go:128] libmachine: (arkade) Calling .GetState I1206 11:53:21.572695 1661470 main.go:128] libmachine: (arkade) Calling .DriverName I1206 11:53:21.572861 1661470 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml I1206 11:53:21.572867 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I1206 11:53:21.572876 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHHostname I1206 11:53:21.575037 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.575449 1661470 main.go:128] libmachine: (arkade) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:72:5c", ip: ""} in network mk-arkade: {Iface:virbr2 ExpiryTime:2021-12-06 12:52:48 +0000 GMT Type:0 Mac:52:54:00:5a:72:5c Iaid: IPaddr:192.168.50.52 Prefix:24 Hostname:arkade Clientid:01:52:54:00:5a:72:5c} I1206 11:53:21.575466 1661470 main.go:128] libmachine: (arkade) DBG | domain arkade has defined IP address 192.168.50.52 and MAC address 52:54:00:5a:72:5c in network mk-arkade I1206 11:53:21.575711 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHPort I1206 11:53:21.575850 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHKeyPath I1206 11:53:21.575977 1661470 main.go:128] libmachine: (arkade) Calling .GetSSHUsername I1206 11:53:21.576076 1661470 sshutil.go:53] new ssh client: &{IP:192.168.50.52 Port:22 SSHKeyPath:/home/alex/.minikube/machines/arkade/id_rsa Username:docker} I1206 11:53:21.596508 1661470 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I1206 11:53:21.612723 1661470 api_server.go:50] waiting for apiserver process to appear ... 
I1206 11:53:21.612756 1661470 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I1206 11:53:21.623607 1661470 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I1206 11:53:21.655932 1661470 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml I1206 11:53:21.655940 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes) I1206 11:53:21.679096 1661470 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml I1206 11:53:21.679103 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes) I1206 11:53:21.730027 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml I1206 11:53:21.730041 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes) I1206 11:53:21.734496 1661470 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I1206 11:53:21.742410 1661470 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml I1206 11:53:21.742419 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes) I1206 11:53:21.770996 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml I1206 11:53:21.771005 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes) I1206 11:53:21.784052 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml I1206 11:53:21.784061 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes) I1206 11:53:21.796039 1661470 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml I1206 11:53:21.796045 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes) I1206 11:53:21.829348 1661470 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml I1206 11:53:21.862080 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml I1206 11:53:21.862088 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes) I1206 11:53:21.909356 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml I1206 11:53:21.909365 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes) I1206 11:53:21.952162 1661470 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml I1206 11:53:21.952173 1661470 
ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes) I1206 11:53:21.974469 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml I1206 11:53:21.974475 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes) I1206 11:53:21.989320 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml I1206 11:53:21.989330 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes) I1206 11:53:22.006630 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml I1206 11:53:22.006637 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes) I1206 11:53:22.026477 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml I1206 11:53:22.026484 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes) I1206 11:53:22.042233 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml I1206 11:53:22.042240 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes) I1206 11:53:22.071230 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml I1206 11:53:22.071250 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes) I1206 11:53:22.101640 1661470 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml I1206 11:53:22.101646 1661470 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes) I1206 11:53:22.130241 1661470 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml I1206 11:53:22.163405 1661470 start.go:725] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS I1206 11:53:22.163441 1661470 api_server.go:70] duration metric: took 782.317694ms to wait for apiserver process to appear ... I1206 11:53:22.163448 1661470 api_server.go:86] waiting for apiserver healthz status ... I1206 11:53:22.163455 1661470 api_server.go:223] Checking apiserver healthz at https://192.168.50.52:6443/healthz ... I1206 11:53:22.171166 1661470 api_server.go:249] https://192.168.50.52:6443/healthz returned 200: ok I1206 11:53:22.173603 1661470 api_server.go:139] control plane version: v1.21.2 I1206 11:53:22.173621 1661470 api_server.go:129] duration metric: took 10.161823ms to wait for apiserver health ... I1206 11:53:22.173628 1661470 system_pods.go:43] waiting for kube-system pods to appear ... 
I1206 11:53:22.180724 1661470 system_pods.go:59] 2 kube-system pods found I1206 11:53:22.180735 1661470 system_pods.go:61] "etcd-arkade" [b0f1a6e2-f7c5-4ffc-9943-a8a8a5629665] Pending I1206 11:53:22.180737 1661470 system_pods.go:61] "kube-apiserver-arkade" [44a1ce18-daaf-478f-bbd2-9367e6210d47] Pending I1206 11:53:22.180740 1661470 system_pods.go:74] duration metric: took 7.109373ms to wait for pod list to return data ... I1206 11:53:22.180746 1661470 kubeadm.go:547] duration metric: took 799.624668ms to wait for : map[apiserver:true system_pods:true] ... I1206 11:53:22.180753 1661470 node_conditions.go:102] verifying NodePressure condition ... I1206 11:53:22.185676 1661470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki I1206 11:53:22.185687 1661470 node_conditions.go:123] node cpu capacity is 2 I1206 11:53:22.185693 1661470 node_conditions.go:105] duration metric: took 4.937833ms to run NodePressure ... I1206 11:53:22.185699 1661470 start.go:219] waiting for startup goroutines ... I1206 11:53:22.239277 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:22.239285 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:22.239428 1661470 main.go:128] libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:22.239463 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:22.239470 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:22.239480 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:22.239486 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:22.239597 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:22.239604 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:22.239618 1661470 main.go:128] libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:22.276699 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:22.276729 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:22.276884 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:22.276891 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:22.276896 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:22.276901 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:22.277067 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:22.277079 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:22.277082 1661470 main.go:128] libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:22.277089 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:22.277096 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:22.277197 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:22.277201 1661470 main.go:128] libmachine: Making call to close connection to plugin binary W1206 11:53:22.770231 1661470 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f 
/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1 stdout: customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created serviceaccount/snapshot-controller created clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created deployment.apps/snapshot-controller created stderr: error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1" I1206 11:53:22.770240 1661470 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1 stdout: customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created serviceaccount/snapshot-controller created clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created deployment.apps/snapshot-controller created stderr: error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1" I1206 11:53:23.117106 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:23.117117 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:23.117312 1661470 main.go:128] libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:23.117337 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:23.117344 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:23.117350 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:23.117356 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:23.117484 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:23.117492 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:23.117493 1661470 main.go:128] 
libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:23.117497 1661470 addons.go:313] Verifying addon csi-hostpath-driver=true in "arkade" I1206 11:53:23.119565 1661470 out.go:170] 🔎 Verifying csi-hostpath-driver addon... I1206 11:53:23.130831 1661470 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml I1206 11:53:23.136330 1661470 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ... I1206 11:53:23.139622 1661470 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver I1206 11:53:25.507903 1661470 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.377052051s) I1206 11:53:25.507920 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:25.507925 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:25.508106 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:25.508112 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:25.508118 1661470 main.go:128] libmachine: Making call to close driver server I1206 11:53:25.508123 1661470 main.go:128] libmachine: (arkade) Calling .Close I1206 11:53:25.509532 1661470 main.go:128] libmachine: (arkade) DBG | Closing plugin on server side I1206 11:53:25.509537 1661470 main.go:128] libmachine: Successfully made call to close driver server I1206 11:53:25.509547 1661470 main.go:128] libmachine: Making call to close connection to plugin binary I1206 11:53:33.643166 1661470 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver I1206 11:53:33.643172 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:34.156383 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:34.660952 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:35.154323 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:35.643953 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:36.143588 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:36.655236 1661470 kapi.go:96] waiting for pod 
"kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:37.143672 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:37.646151 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:38.143516 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:38.643276 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:39.149276 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:39.649937 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:40.149087 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:40.643860 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:41.144726 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:41.647687 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:42.144142 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:42.644271 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:43.152610 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:43.652621 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:44.145495 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:44.644055 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:45.143310 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:45.643997 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:46.158142 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:46.652540 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:47.144151 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:47.644064 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:48.143295 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:48.644208 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:49.144127 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:49.643304 
1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:50.145127 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:50.652460 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:51.146062 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:51.643653 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:52.144855 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:52.643752 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:53.144023 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:53.643905 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:54.144849 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:54.653010 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:55.143969 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:55.643134 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:56.144005 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:56.644516 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:57.152792 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:57.661727 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:58.167393 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:58.643685 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:59.143853 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:53:59.644161 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:00.148213 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:00.643858 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:01.155741 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:01.655994 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:02.143155 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: 
Pending: [] I1206 11:54:02.644322 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:03.143423 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:03.661848 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:04.159201 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:04.650090 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:05.143888 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:05.643981 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:06.147688 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:06.643781 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:07.153886 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:07.643412 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:08.144878 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:08.650054 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:09.153484 1661470 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [] I1206 11:54:09.646962 1661470 kapi.go:108] duration metric: took 46.510628503s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ... I1206 11:54:09.649890 1661470 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass, volumesnapshots, csi-hostpath-driver I1206 11:54:09.649927 1661470 addons.go:344] enableAddons completed in 48.268728782s I1206 11:54:09.713572 1661470 start.go:463] kubectl: 1.22.0, cluster: 1.21.2 (minor skew: 1) I1206 11:54:09.716465 1661470 out.go:170] 🏄 Done! 
kubectl is now configured to use "arkade" cluster and "default" namespace by default
* * ==> container status <== *
CONTAINER       IMAGE           CREATED          STATE     NAME                                      ATTEMPT   POD ID
98483ea8b4f68   619b7c61efe06   23 seconds ago   Exited    postgresql                                5         f03bb5eca5eb2
4555fd29b89e8   77a8908a12e35   18 minutes ago   Running   liveness-probe                            0         b861f04baf674
abd806c809716   b4d03a87a2f45   18 minutes ago   Running   hostpath                                  0         b861f04baf674
7342679ade04c   84b0f3f7f6f04   18 minutes ago   Running   node-driver-registrar                     0         b861f04baf674
aba80c3423278   fa6785e2e7324   18 minutes ago   Running   csi-external-health-monitor-controller   0         b861f04baf674
0824a0ee2c939   a8fe79377034e   18 minutes ago   Running   csi-resizer                               0         8c1f43e78dc7c
1b08a845c7cc0   f1d8a00ae690f   18 minutes ago   Running   volume-snapshot-controller                0         035b35cd84c3a
39203b714273e   e0d187f105d60   18 minutes ago   Running   csi-provisioner                           0         d5862517979ac
e680e67aa1d34   03ce9595bf925   18 minutes ago   Running   csi-attacher                              0         e3cb4c1903820
16d85f47542fa   da32a49a903a6   18 minutes ago   Running   csi-snapshotter                           0         df864120595ee
b33ba834b33e4   f1d8a00ae690f   18 minutes ago   Running   volume-snapshot-controller                0         599490205ef59
a7d8b75b05377   223c6dea7afe5   18 minutes ago   Running   csi-external-health-monitor-agent         0         b861f04baf674
f2e42d4ca968a   6e38f40d628db   18 minutes ago   Running   storage-provisioner                       0         b6ccbfcf9b0a4
0ad7579488ea8   296a6d5035e2d   18 minutes ago   Running   coredns                                   0         ff82e9a60e64a
4ab1147b826e4   a6ebd1c1ad981   18 minutes ago   Running   kube-proxy                                0         6ec2576ea80eb
add6ca4bb0f3a   ae24db9aa2cc0   19 minutes ago   Running   kube-controller-manager                   0         366c9c0f6a33b
1ff1c68368c6a   0369cf4303ffd   19 minutes ago   Running   etcd                                      0         04f185dad491e
e76091cd30baf   f917b8c8f55b7   19 minutes ago   Running   kube-scheduler                            0         c354d92f5df00
5f4990e27f986   106ff58d43082   19 minutes ago   Running   kube-apiserver                            0         e742ccd9de8a0
* * ==> containerd <== * -- Logs begin at Mon 2021-12-06 11:52:44 UTC, end at Mon 2021-12-06 12:12:21 UTC.
-- Dec 06 12:09:00 arkade containerd[2161]: time="2021-12-06T12:09:00.208599454Z" level=info msg="StartContainer for \"068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538\" returns successfully" Dec 06 12:09:00 arkade containerd[2161]: time="2021-12-06T12:09:00.288460024Z" level=info msg="Finish piping stderr of container \"068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538\"" Dec 06 12:09:00 arkade containerd[2161]: time="2021-12-06T12:09:00.288779433Z" level=info msg="Finish piping stdout of container \"068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538\"" Dec 06 12:09:00 arkade containerd[2161]: time="2021-12-06T12:09:00.290127592Z" level=info msg="TaskExit event &TaskExit{ContainerID:068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538,ID:068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538,Pid:13716,ExitStatus:1,ExitedAt:2021-12-06 12:09:00.289941457 +0000 UTC,XXX_unrecognized:[],}" Dec 06 12:09:00 arkade containerd[2161]: time="2021-12-06T12:09:00.327765771Z" level=info msg="shim reaped" id=068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538 Dec 06 12:09:01 arkade containerd[2161]: time="2021-12-06T12:09:01.033038581Z" level=info msg="RemoveContainer for \"52d2951f43970a46513142c26d34dc384c5c9bf1994fcf0eaa08d39977966041\"" Dec 06 12:09:01 arkade containerd[2161]: time="2021-12-06T12:09:01.048179615Z" level=info msg="RemoveContainer for \"52d2951f43970a46513142c26d34dc384c5c9bf1994fcf0eaa08d39977966041\" returns successfully" Dec 06 12:09:20 arkade containerd[2161]: time="2021-12-06T12:09:20.929810220Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for container &ContainerMetadata{Name:postgresql,Attempt:2,}" Dec 06 12:09:20 arkade containerd[2161]: time="2021-12-06T12:09:20.991327330Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for &ContainerMetadata{Name:postgresql,Attempt:2,} returns container id \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\"" Dec 06 12:09:20 arkade containerd[2161]: time="2021-12-06T12:09:20.996700781Z" level=info msg="StartContainer for \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\"" Dec 06 12:09:20 arkade containerd[2161]: time="2021-12-06T12:09:20.997889809Z" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2" Dec 06 12:09:20 arkade containerd[2161]: time="2021-12-06T12:09:20.999441392Z" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/5a06af7d29ee90e8e472c4634dc3c66065b953ca68bbb775f663b94cf7c3e34b" debug=false pid=13908 Dec 06 12:09:21 arkade containerd[2161]: time="2021-12-06T12:09:21.098416099Z" level=info msg="StartContainer for \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\" returns successfully" Dec 06 12:09:21 arkade containerd[2161]: time="2021-12-06T12:09:21.183364545Z" level=info msg="TaskExit event &TaskExit{ContainerID:ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4,ID:ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4,Pid:13927,ExitStatus:1,ExitedAt:2021-12-06 12:09:21.183133275 +0000 UTC,XXX_unrecognized:[],}" Dec 06 12:09:21 arkade containerd[2161]: time="2021-12-06T12:09:21.183576683Z" level=info msg="Finish piping stderr of container \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\"" Dec 06 12:09:21 arkade containerd[2161]: 
time="2021-12-06T12:09:21.183598049Z" level=info msg="Finish piping stdout of container \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\"" Dec 06 12:09:21 arkade containerd[2161]: time="2021-12-06T12:09:21.214765695Z" level=info msg="shim reaped" id=ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4 Dec 06 12:09:22 arkade containerd[2161]: time="2021-12-06T12:09:22.118348844Z" level=info msg="RemoveContainer for \"068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538\"" Dec 06 12:09:22 arkade containerd[2161]: time="2021-12-06T12:09:22.137067906Z" level=info msg="RemoveContainer for \"068e8f2becc47ea08304408b02e6af9c6dbc79229bd980fbb299893a67a55538\" returns successfully" Dec 06 12:09:26 arkade containerd[2161]: time="2021-12-06T12:09:26.015984996Z" level=info msg="StopPodSandbox for \"890230d30c919187ad1966d12893765582ef90ed406c31f7179a4e890043604c\"" Dec 06 12:09:26 arkade containerd[2161]: time="2021-12-06T12:09:26.078343434Z" level=info msg="TearDown network for sandbox \"890230d30c919187ad1966d12893765582ef90ed406c31f7179a4e890043604c\" successfully" Dec 06 12:09:26 arkade containerd[2161]: time="2021-12-06T12:09:26.078449689Z" level=info msg="StopPodSandbox for \"890230d30c919187ad1966d12893765582ef90ed406c31f7179a4e890043604c\" returns successfully" Dec 06 12:09:26 arkade containerd[2161]: time="2021-12-06T12:09:26.078767343Z" level=info msg="RemovePodSandbox for \"890230d30c919187ad1966d12893765582ef90ed406c31f7179a4e890043604c\"" Dec 06 12:09:26 arkade containerd[2161]: time="2021-12-06T12:09:26.087642220Z" level=info msg="RemovePodSandbox \"890230d30c919187ad1966d12893765582ef90ed406c31f7179a4e890043604c\" returns successfully" Dec 06 12:09:41 arkade containerd[2161]: time="2021-12-06T12:09:41.938314479Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for container &ContainerMetadata{Name:postgresql,Attempt:3,}" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.014207397Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for &ContainerMetadata{Name:postgresql,Attempt:3,} returns container id \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\"" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.014709164Z" level=info msg="StartContainer for \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\"" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.015100443Z" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.015729081Z" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/ef9a0617cac7cd06a4fb9ea036caa06065aa2653693ff49d6daecf8c06ab9330" debug=false pid=14149 Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.120856995Z" level=info msg="StartContainer for \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\" returns successfully" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.185140278Z" level=info msg="Finish piping stderr of container \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\"" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.185191484Z" level=info msg="Finish piping stdout of container \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\"" Dec 06 12:09:42 arkade containerd[2161]: 
time="2021-12-06T12:09:42.186580485Z" level=info msg="TaskExit event &TaskExit{ContainerID:ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce,ID:ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce,Pid:14168,ExitStatus:1,ExitedAt:2021-12-06 12:09:42.186345102 +0000 UTC,XXX_unrecognized:[],}" Dec 06 12:09:42 arkade containerd[2161]: time="2021-12-06T12:09:42.230981283Z" level=info msg="shim reaped" id=ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce Dec 06 12:09:43 arkade containerd[2161]: time="2021-12-06T12:09:43.209620976Z" level=info msg="RemoveContainer for \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\"" Dec 06 12:09:43 arkade containerd[2161]: time="2021-12-06T12:09:43.228118323Z" level=info msg="RemoveContainer for \"ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4\" returns successfully" Dec 06 12:10:25 arkade containerd[2161]: time="2021-12-06T12:10:25.954977164Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for container &ContainerMetadata{Name:postgresql,Attempt:4,}" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.010967490Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for &ContainerMetadata{Name:postgresql,Attempt:4,} returns container id \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\"" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.017681784Z" level=info msg="StartContainer for \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\"" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.018037404Z" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.018551211Z" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/6cfaf5ec3485c3a6d5b88d822bb915589149f149e4a88559db1d23ac7ab0231d" debug=false pid=14497 Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.089719414Z" level=info msg="RemoveContainer for \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\"" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.099133124Z" level=info msg="RemoveContainer for \"ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce\" returns successfully" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.122248797Z" level=info msg="StartContainer for \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\" returns successfully" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.203532622Z" level=info msg="Finish piping stderr of container \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\"" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.204762898Z" level=info msg="Finish piping stdout of container \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\"" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.204841170Z" level=info msg="TaskExit event &TaskExit{ContainerID:c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c,ID:c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c,Pid:14513,ExitStatus:1,ExitedAt:2021-12-06 12:10:26.204615927 +0000 UTC,XXX_unrecognized:[],}" Dec 06 12:10:26 arkade containerd[2161]: time="2021-12-06T12:10:26.236355564Z" level=info msg="shim reaped" 
id=c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c Dec 06 12:11:57 arkade containerd[2161]: time="2021-12-06T12:11:57.932799839Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for container &ContainerMetadata{Name:postgresql,Attempt:5,}" Dec 06 12:11:57 arkade containerd[2161]: time="2021-12-06T12:11:57.991345714Z" level=info msg="CreateContainer within sandbox \"f03bb5eca5eb25ac22cf0b3e2df6e92ea9a92a96ea5b3142f7c896887648fdd0\" for &ContainerMetadata{Name:postgresql,Attempt:5,} returns container id \"98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021\"" Dec 06 12:11:57 arkade containerd[2161]: time="2021-12-06T12:11:57.992114985Z" level=info msg="StartContainer for \"98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021\"" Dec 06 12:11:57 arkade containerd[2161]: time="2021-12-06T12:11:57.992532221Z" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2" Dec 06 12:11:57 arkade containerd[2161]: time="2021-12-06T12:11:57.994289833Z" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/92aad672ddb1287d6eb021c41fd6d5b16425e440d101782dab736a1962d30770" debug=false pid=15182 Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.090679169Z" level=info msg="StartContainer for \"98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021\" returns successfully" Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.153526565Z" level=info msg="Finish piping stderr of container \"98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021\"" Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.153535070Z" level=info msg="Finish piping stdout of container \"98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021\"" Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.154825768Z" level=info msg="TaskExit event &TaskExit{ContainerID:98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021,ID:98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021,Pid:15199,ExitStatus:1,ExitedAt:2021-12-06 12:11:58.154709629 +0000 UTC,XXX_unrecognized:[],}" Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.187534404Z" level=info msg="shim reaped" id=98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021 Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.645170778Z" level=info msg="RemoveContainer for \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\"" Dec 06 12:11:58 arkade containerd[2161]: time="2021-12-06T12:11:58.660626827Z" level=info msg="RemoveContainer for \"c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c\" returns successfully" * * ==> coredns [0ad7579488ea8262a06a7fa92dbef42b8c2789489acde835179164e8e227ca8b] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b CoreDNS-1.8.0 linux/amd64, go1.15.3, 054c9ae * * ==> describe nodes <== * Name: arkade Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=arkade kubernetes.io/os=linux minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11 minikube.k8s.io/name=arkade minikube.k8s.io/updated_at=2021_12_06T11_53_20_0700 minikube.k8s.io/version=v1.21.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= 
topology.hostpath.csi/node=arkade
Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"arkade"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 06 Dec 2021 11:53:17 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  arkade
  AcquireTime:
  RenewTime:       Mon, 06 Dec 2021 12:12:20 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 06 Dec 2021 12:08:58 +0000   Mon, 06 Dec 2021 11:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 06 Dec 2021 12:08:58 +0000   Mon, 06 Dec 2021 11:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 06 Dec 2021 12:08:58 +0000   Mon, 06 Dec 2021 11:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 06 Dec 2021 12:08:58 +0000   Mon, 06 Dec 2021 11:53:33 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.50.52
  Hostname:    arkade
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5952312Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5952312Ki
  pods:               110
System Info:
  Machine ID:                 e28f4b111a59445c9b0e4aa6acad368a
  System UUID:                e28f4b11-1a59-445c-9b0e-4aa6acad368a
  Boot ID:                    49afb4a8-3ea5-46ee-b178-074269285f4c
  Kernel Version:             4.19.182
  OS Image:                   Buildroot 2020.02.12
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.4
  Kubelet Version:            v1.21.2
  Kube-Proxy Version:         v1.21.2
PodCIDR:   10.244.0.0/24
PodCIDRs:  10.244.0.0/24
Non-terminated Pods:  (15 in total)
  Namespace    Name                                 CPU Requests          CPU Limits        Memory Requests        Memory Limits         Age
  ---------    ----                                 ------------          ----------        ---------------        -------------         ---
  default      postgresql-postgresql-0              250m (12%!)(MISSING)  0 (0%!)(MISSING)  256Mi (4%!)(MISSING)   0 (0%!)(MISSING)      3m26s
  kube-system  coredns-558bd4d5db-mlvxw             100m (5%!)(MISSING)   0 (0%!)(MISSING)  70Mi (1%!)(MISSING)    170Mi (2%!)(MISSING)  18m
  kube-system  csi-hostpath-attacher-0              0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  csi-hostpath-provisioner-0           0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  csi-hostpath-resizer-0               0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  csi-hostpath-snapshotter-0           0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  csi-hostpathplugin-0                 0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  etcd-arkade                          100m (5%!)(MISSING)   0 (0%!)(MISSING)  100Mi (1%!)(MISSING)   0 (0%!)(MISSING)      19m
  kube-system  kube-apiserver-arkade                250m (12%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      19m
  kube-system  kube-controller-manager-arkade       200m (10%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  kube-proxy-wxknx                     0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  kube-scheduler-arkade                100m (5%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  snapshot-controller-989f9ddc8-qmvvs  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  snapshot-controller-989f9ddc8-vxszk  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
  kube-system  storage-provisioner                  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      18m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                1 (50%!)(MISSING)     0 (0%!)(MISSING)
  memory             426Mi (7%!)(MISSING)  170Mi (2%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  NodeHasSufficientMemory  19m (x5 over 19m)  kubelet     Node arkade status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    19m (x4 over 19m)  kubelet     Node arkade status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     19m (x4 over 19m)  kubelet     Node arkade status is now: NodeHasSufficientPID
  Normal  Starting                 18m                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  18m                kubelet     Node arkade status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    18m                kubelet     Node arkade status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     18m                kubelet     Node arkade status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                18m                kubelet     Node arkade status is now: NodeReady
  Normal  Starting                 18m                kube-proxy  Starting kube-proxy.
* * ==> dmesg <== * [Dec 6 11:52] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.080122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. [ +2.992470] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +0.961571] systemd-fstab-generator[1159]: Ignoring "noauto" for root device [ +0.027160] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. [ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.) [ +0.552547] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network [ +0.711145] vboxguest: loading out-of-tree module taints kernel. [ +0.003227] vboxguest: PCI device not found, probably running on physical hardware. [ +2.414806] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[Dec 6 11:53] systemd-fstab-generator[2101]: Ignoring "noauto" for root device [ +0.240063] systemd-fstab-generator[2151]: Ignoring "noauto" for root device [ +5.609101] systemd-fstab-generator[2329]: Ignoring "noauto" for root device [ +13.726433] systemd-fstab-generator[2694]: Ignoring "noauto" for root device [ +15.169168] kauditd_printk_skb: 38 callbacks suppressed [ +11.601252] kauditd_printk_skb: 170 callbacks suppressed [ +7.037931] kauditd_printk_skb: 2 callbacks suppressed [Dec 6 11:54] kauditd_printk_skb: 2 callbacks suppressed [ +41.008857] NFSD: Unable to end grace period: -110 [Dec 6 11:59] kauditd_printk_skb: 17 callbacks suppressed [Dec 6 12:08] kauditd_printk_skb: 50 callbacks suppressed [Dec 6 12:09] kauditd_printk_skb: 23 callbacks suppressed * * ==> etcd [1ff1c68368c6a844a7235b8a8d791b05e7667c38e9209070dd9c46635c32c646] <== * 2021-12-06 12:03:05.026539 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:03:14.623107 I | mvcc: store.index: compact 1002 2021-12-06 12:03:14.641500 I | mvcc: finished scheduled compaction at 1002 (took 17.953344ms) 2021-12-06 12:03:15.032536 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:03:25.026799 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:03:35.028891 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:03:45.027942 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:03:55.029173 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:05.029663 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:15.026824 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:25.026905 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:35.026950 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:45.027420 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:04:55.028058 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:05.028005 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:15.028043 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:25.026557 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:35.028589 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:45.027096 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:05:55.027921 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:05.028679 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:15.027579 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:25.026961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:35.026491 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:45.027087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:06:55.027497 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:05.026931 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:15.027368 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:25.026678 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:35.026358 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:45.026753 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:07:55.027182 I | etcdserver/api/etcdhttp: /health OK (status 
code 200) 2021-12-06 12:08:05.028813 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:08:14.630102 I | mvcc: store.index: compact 1380 2021-12-06 12:08:14.631198 I | mvcc: finished scheduled compaction at 1380 (took 764.614µs) 2021-12-06 12:08:15.028688 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:08:25.027315 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:08:35.027362 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:08:45.026583 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:08:55.026606 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:05.027590 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:15.027198 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:25.027068 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:35.027406 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:45.028068 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:09:55.027499 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:05.027524 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:15.026841 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:25.028166 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:35.028727 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:45.027525 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:10:55.028883 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:05.028087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:15.027624 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:25.027317 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:35.026274 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:45.027207 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:11:55.028603 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:12:05.027150 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-12-06 12:12:15.028221 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 12:12:21 up 19 min, 0 users, load average: 0.43, 0.64, 0.41 Linux arkade 4.19.182 #1 SMP Wed Jun 9 00:54:54 UTC 2021 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2020.02.12" * * ==> kube-apiserver [5f4990e27f9868e19f2fbf07f581a49fe76cb2812b2f7d0a8e22295b5394d349] <== * I1206 12:00:02.283668 1 client.go:360] parsed scheme: "passthrough" I1206 12:00:02.283773 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:00:02.283797 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:00:38.322349 1 client.go:360] parsed scheme: "passthrough" I1206 12:00:38.322785 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:00:38.322924 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:01:16.073362 1 client.go:360] parsed scheme: "passthrough" I1206 12:01:16.073878 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:01:16.073997 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:01:59.380958 1 client.go:360] parsed scheme: "passthrough" 
I1206 12:01:59.380993 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:01:59.381002 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:02:39.151731 1 client.go:360] parsed scheme: "passthrough" I1206 12:02:39.152384 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:02:39.152813 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:03:14.821328 1 client.go:360] parsed scheme: "passthrough" I1206 12:03:14.821838 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:03:14.822171 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:03:53.503527 1 client.go:360] parsed scheme: "passthrough" I1206 12:03:53.503716 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:03:53.503754 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:04:31.765486 1 client.go:360] parsed scheme: "passthrough" I1206 12:04:31.765564 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:04:31.765586 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:05:08.676949 1 client.go:360] parsed scheme: "passthrough" I1206 12:05:08.677052 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:05:08.677094 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:05:51.128461 1 client.go:360] parsed scheme: "passthrough" I1206 12:05:51.128543 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:05:51.128562 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:06:31.434319 1 client.go:360] parsed scheme: "passthrough" I1206 12:06:31.434515 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:06:31.434828 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:07:05.695018 1 client.go:360] parsed scheme: "passthrough" I1206 12:07:05.699274 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:07:05.699291 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:07:49.050483 1 client.go:360] parsed scheme: "passthrough" I1206 12:07:49.050925 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:07:49.051319 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:08:21.341002 1 client.go:360] parsed scheme: "passthrough" I1206 12:08:21.341266 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:08:21.341389 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:08:54.683481 1 client.go:360] parsed scheme: "passthrough" I1206 12:08:54.683508 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:08:54.683514 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:09:37.607539 1 client.go:360] parsed scheme: "passthrough" I1206 12:09:37.607683 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:09:37.607706 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:10:10.514551 1 client.go:360] parsed scheme: 
"passthrough" I1206 12:10:10.514635 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:10:10.514657 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:10:49.549716 1 client.go:360] parsed scheme: "passthrough" I1206 12:10:49.549743 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:10:49.549751 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:11:19.854831 1 client.go:360] parsed scheme: "passthrough" I1206 12:11:19.854874 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:11:19.854882 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1206 12:11:56.311310 1 client.go:360] parsed scheme: "passthrough" I1206 12:11:56.311343 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1206 12:11:56.311349 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [add6ca4bb0f3ac784a84dcae0e3318e1b988a2cf9c6b25ef88d23125e144d167] <== * I1206 11:53:33.205798 1 shared_informer.go:247] Caches are synced for ReplicaSet I1206 11:53:33.206998 1 shared_informer.go:247] Caches are synced for crt configmap I1206 11:53:33.208125 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1206 11:53:33.208650 1 shared_informer.go:247] Caches are synced for PV protection I1206 11:53:33.210249 1 shared_informer.go:247] Caches are synced for job I1206 11:53:33.210331 1 shared_informer.go:247] Caches are synced for stateful set I1206 11:53:33.214037 1 shared_informer.go:247] Caches are synced for PVC protection I1206 11:53:33.216647 1 shared_informer.go:247] Caches are synced for ReplicationController I1206 11:53:33.223546 1 shared_informer.go:247] Caches are synced for TTL I1206 11:53:33.226951 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1206 11:53:33.226998 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1206 11:53:33.227009 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1206 11:53:33.227015 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1206 11:53:33.231820 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wxknx" I1206 11:53:33.235260 1 shared_informer.go:247] Caches are synced for taint I1206 11:53:33.235358 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: W1206 11:53:33.235413 1 node_lifecycle_controller.go:1013] Missing timestamp for Node arkade. Assuming now as a timestamp. I1206 11:53:33.235459 1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
I1206 11:53:33.235575 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I1206 11:53:33.236251 1 event.go:291] "Event occurred" object="arkade" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node arkade event: Registered Node arkade in Controller" I1206 11:53:33.246543 1 shared_informer.go:247] Caches are synced for ephemeral I1206 11:53:33.248755 1 shared_informer.go:247] Caches are synced for cronjob I1206 11:53:33.255084 1 event.go:291] "Event occurred" object="kube-system/etcd-arkade" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I1206 11:53:33.255478 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-arkade" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I1206 11:53:33.265167 1 shared_informer.go:247] Caches are synced for bootstrap_signer E1206 11:53:33.267828 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"afcf1865-c26e-424e-a8fa-ff2494f4335a", ResourceVersion:"280", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63774388400, loc:(*time.Location)(0x72fe440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000ddf638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000ddf650)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010fbd20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00167ba40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000ddf668), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000ddf680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0010fbd60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000e183c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000052448), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005083f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000e43640)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000052588)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I1206 11:53:33.313303 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful" I1206 11:53:33.313325 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful" I1206 11:53:33.313575 1 shared_informer.go:247] Caches are synced for namespace I1206 11:53:33.323981 1 event.go:291] "Event occurred" 
object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful" I1206 11:53:33.332957 1 event.go:291] "Event occurred" object="kube-system/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful" I1206 11:53:33.332984 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful" I1206 11:53:33.361924 1 shared_informer.go:247] Caches are synced for deployment I1206 11:53:33.376513 1 shared_informer.go:247] Caches are synced for service account I1206 11:53:33.384813 1 shared_informer.go:247] Caches are synced for disruption I1206 11:53:33.384835 1 disruption.go:371] Sending events to api server. I1206 11:53:33.455267 1 shared_informer.go:247] Caches are synced for resource quota I1206 11:53:33.525850 1 shared_informer.go:247] Caches are synced for resource quota I1206 11:53:33.771768 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 1" I1206 11:53:33.777568 1 event.go:291] "Event occurred" object="kube-system/snapshot-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set snapshot-controller-989f9ddc8 to 2" I1206 11:53:33.899962 1 shared_informer.go:247] Caches are synced for garbage collector I1206 11:53:33.900065 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1206 11:53:33.926050 1 event.go:291] "Event occurred" object="kube-system/snapshot-controller-989f9ddc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: snapshot-controller-989f9ddc8-qmvvs" I1206 11:53:33.933544 1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mlvxw" I1206 11:53:33.969295 1 event.go:291] "Event occurred" object="kube-system/snapshot-controller-989f9ddc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: snapshot-controller-989f9ddc8-vxszk" I1206 11:53:33.982076 1 shared_informer.go:247] Caches are synced for garbage collector I1206 11:53:38.235963 1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
I1206 11:54:03.493852 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
I1206 11:54:03.494124 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1206 11:54:03.594383 1 shared_informer.go:247] Caches are synced for resource quota
I1206 11:54:04.035522 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1206 11:54:04.136300 1 shared_informer.go:247] Caches are synced for garbage collector
I1206 11:59:03.021536 1 event.go:291] "Event occurred" object="default/postgresql-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod postgresql-postgresql-0 in StatefulSet postgresql-postgresql successful"
I1206 12:08:42.609291 1 stateful_set.go:419] StatefulSet has been deleted default/postgresql-postgresql
I1206 12:08:55.526920 1 event.go:291] "Event occurred" object="default/postgresql-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-postgresql-postgresql-0 Pod postgresql-postgresql-0 in StatefulSet postgresql-postgresql success"
I1206 12:08:55.531910 1 event.go:291] "Event occurred" object="default/postgresql-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod postgresql-postgresql-0 in StatefulSet postgresql-postgresql successful"
I1206 12:08:55.562951 1 event.go:291] "Event occurred" object="default/data-postgresql-postgresql-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
I1206 12:08:56.942721 1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-16255071-5e96-4b3a-b4d7-03a75f5bc77a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^48f17488-568d-11ec-99d4-9ec0b78bb45e") from node "arkade"
I1206 12:08:57.506167 1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-16255071-5e96-4b3a-b4d7-03a75f5bc77a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^48f17488-568d-11ec-99d4-9ec0b78bb45e") from node "arkade"
I1206 12:08:57.507625 1 event.go:291] "Event occurred" object="default/postgresql-postgresql-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-16255071-5e96-4b3a-b4d7-03a75f5bc77a\" "
*
* ==> kube-proxy [4ab1147b826e4cdf37b3ce3402225e859a5f2c74568fba9741000f182475cbb0] <==
*
I1206 11:53:36.265721 1 node.go:172] Successfully retrieved node IP: 192.168.50.52
I1206 11:53:36.265759 1 server_others.go:140] Detected node IP 192.168.50.52
W1206 11:53:36.265779 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
W1206 11:53:36.452460 1 server_others.go:197] No iptables support for IPv6: exit status 3
I1206 11:53:36.452494 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I1206 11:53:36.452515 1 server_others.go:212] Using iptables Proxier.
I1206 11:53:36.453349 1 server.go:643] Version: v1.21.2
I1206 11:53:36.453574 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1206 11:53:36.453602 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1206 11:53:36.453933 1 config.go:315] Starting service config controller
I1206 11:53:36.453944 1 shared_informer.go:240] Waiting for caches to sync for service config
I1206 11:53:36.453965 1 config.go:224] Starting endpoint slice config controller
I1206 11:53:36.453968 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W1206 11:53:36.459722 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W1206 11:53:36.462893 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1206 11:53:36.556454 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1206 11:53:36.556565 1 shared_informer.go:247] Caches are synced for service config
W1206 12:02:47.468447 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W1206 12:11:28.469563 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
*
* ==> kube-scheduler [e76091cd30baf74ec52b36163e61338750d00556bc6dc55408fe064443bdcf4a] <==
*
I1206 11:53:14.692607 1 serving.go:347] Generated self-signed cert in-memory
W1206 11:53:17.484860 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1206 11:53:17.484982 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1206 11:53:17.485000 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W1206 11:53:17.485004 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1206 11:53:17.526432 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1206 11:53:17.526452 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1206 11:53:17.526930 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1206 11:53:17.526990 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1206 11:53:17.529220 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1206 11:53:17.529585 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1206 11:53:17.529704 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1206 11:53:17.529820 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1206 11:53:17.529901 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1206 11:53:17.530110 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1206 11:53:17.530236 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1206 11:53:17.530460 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:17.530536 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:17.530626 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:17.530713 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1206 11:53:17.530927 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1206 11:53:17.531024 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:17.531117 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1206 11:53:18.347410 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:18.414928 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:18.529059 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1206 11:53:18.651795 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1206 11:53:18.713969 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1206 11:53:18.774817 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I1206 11:53:19.127080 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Mon 2021-12-06 11:52:44 UTC, end at Mon 2021-12-06 12:12:21 UTC.
-- Dec 06 12:09:23 arkade kubelet[2703]: E1206 12:09:23.121158 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:09:25 arkade kubelet[2703]: I1206 12:09:25.886276 2703 clientconn.go:106] parsed scheme: "" Dec 06 12:09:25 arkade kubelet[2703]: I1206 12:09:25.887034 2703 clientconn.go:106] scheme "" not registered, fallback to default scheme Dec 06 12:09:25 arkade kubelet[2703]: I1206 12:09:25.887144 2703 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock 0 }] } Dec 06 12:09:25 arkade kubelet[2703]: I1206 12:09:25.887209 2703 clientconn.go:948] ClientConn switching balancer to "pick_first" Dec 06 12:09:25 arkade kubelet[2703]: I1206 12:09:25.887372 2703 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick Dec 06 12:09:26 arkade kubelet[2703]: I1206 12:09:26.881880 2703 scope.go:111] "RemoveContainer" containerID="ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4" Dec 06 12:09:26 arkade kubelet[2703]: E1206 12:09:26.884096 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:09:41 arkade kubelet[2703]: I1206 12:09:41.927787 2703 scope.go:111] "RemoveContainer" containerID="ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4" Dec 06 12:09:43 arkade kubelet[2703]: I1206 12:09:43.197208 2703 scope.go:111] "RemoveContainer" containerID="ba809c116e26b2c6b87110fe042bc89ba579dc105bfb79784d13ad5b2d6c09a4" Dec 06 12:09:43 arkade kubelet[2703]: I1206 12:09:43.201573 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:09:43 arkade kubelet[2703]: E1206 12:09:43.206379 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:09:44 arkade kubelet[2703]: I1206 12:09:44.199740 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:09:44 arkade kubelet[2703]: E1206 12:09:44.200657 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:09:46 arkade kubelet[2703]: I1206 12:09:46.881852 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:09:46 arkade kubelet[2703]: E1206 12:09:46.885023 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:09:59 arkade kubelet[2703]: I1206 12:09:59.929607 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:09:59 arkade kubelet[2703]: E1206 12:09:59.930430 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:10 arkade kubelet[2703]: I1206 12:10:10.927440 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:10:10 arkade kubelet[2703]: E1206 12:10:10.928702 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:25 arkade kubelet[2703]: I1206 12:10:25.934334 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:10:26 arkade kubelet[2703]: I1206 12:10:26.088868 2703 scope.go:111] "RemoveContainer" containerID="ce0981050541c73940686eb09f9eb1ffadda1fcdb03781c0b89ed2b45e0818ce" Dec 06 12:10:26 arkade kubelet[2703]: I1206 12:10:26.340357 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:10:26 arkade kubelet[2703]: E1206 12:10:26.340647 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:27 arkade kubelet[2703]: I1206 12:10:27.341451 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:10:27 arkade kubelet[2703]: E1206 12:10:27.341786 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:28 arkade kubelet[2703]: I1206 12:10:28.345278 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:10:28 arkade kubelet[2703]: E1206 12:10:28.346049 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:41 arkade kubelet[2703]: I1206 12:10:41.927725 2703 
scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:10:41 arkade kubelet[2703]: E1206 12:10:41.927991 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:10:43 arkade kubelet[2703]: I1206 12:10:43.129918 2703 clientconn.go:106] parsed scheme: "" Dec 06 12:10:43 arkade kubelet[2703]: I1206 12:10:43.130003 2703 clientconn.go:106] scheme "" not registered, fallback to default scheme Dec 06 12:10:43 arkade kubelet[2703]: I1206 12:10:43.130167 2703 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock 0 }] } Dec 06 12:10:43 arkade kubelet[2703]: I1206 12:10:43.130410 2703 clientconn.go:948] ClientConn switching balancer to "pick_first" Dec 06 12:10:43 arkade kubelet[2703]: I1206 12:10:43.130558 2703 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick Dec 06 12:10:54 arkade kubelet[2703]: I1206 12:10:54.928320 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:10:54 arkade kubelet[2703]: E1206 12:10:54.930835 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:11:08 arkade kubelet[2703]: I1206 12:11:08.927596 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:08 arkade kubelet[2703]: E1206 12:11:08.927972 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:11:19 arkade kubelet[2703]: I1206 12:11:19.928593 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:19 arkade kubelet[2703]: E1206 12:11:19.931948 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:11:30 arkade kubelet[2703]: I1206 12:11:30.927372 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:30 arkade kubelet[2703]: E1206 12:11:30.928461 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 
06 12:11:42 arkade kubelet[2703]: I1206 12:11:42.927620 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:42 arkade kubelet[2703]: E1206 12:11:42.928223 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:11:57 arkade kubelet[2703]: I1206 12:11:57.930084 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:58 arkade kubelet[2703]: I1206 12:11:58.640196 2703 scope.go:111] "RemoveContainer" containerID="c96a1b5eb3306d64fee7e1790db57b2c430ed5f1e596ac5fcf442fb686ab011c" Dec 06 12:11:58 arkade kubelet[2703]: I1206 12:11:58.640973 2703 scope.go:111] "RemoveContainer" containerID="98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021" Dec 06 12:11:58 arkade kubelet[2703]: E1206 12:11:58.642094 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:12:06 arkade kubelet[2703]: I1206 12:12:06.881462 2703 scope.go:111] "RemoveContainer" containerID="98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021" Dec 06 12:12:06 arkade kubelet[2703]: E1206 12:12:06.881811 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:12:07 arkade kubelet[2703]: I1206 12:12:07.673588 2703 scope.go:111] "RemoveContainer" containerID="98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021" Dec 06 12:12:07 arkade kubelet[2703]: E1206 12:12:07.674073 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:12:18 arkade kubelet[2703]: I1206 12:12:18.927579 2703 scope.go:111] "RemoveContainer" containerID="98483ea8b4f68204d86c696a8ee75f3035140fe214d22175cbafebde8a8d1021" Dec 06 12:12:18 arkade kubelet[2703]: E1206 12:12:18.928704 2703 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"postgresql\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=postgresql pod=postgresql-postgresql-0_default(719f3a2e-884a-4ccc-b5a5-8ccaaf69db85)\"" pod="default/postgresql-postgresql-0" podUID=719f3a2e-884a-4ccc-b5a5-8ccaaf69db85 Dec 06 12:12:20 arkade kubelet[2703]: I1206 12:12:20.136798 2703 clientconn.go:106] parsed scheme: "" Dec 06 12:12:20 arkade kubelet[2703]: I1206 12:12:20.136879 2703 clientconn.go:106] scheme "" not registered, fallback to default scheme Dec 06 12:12:20 arkade kubelet[2703]: I1206 
12:12:20.136995 2703 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock 0 }] }
Dec 06 12:12:20 arkade kubelet[2703]: I1206 12:12:20.137022 2703 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 06 12:12:20 arkade kubelet[2703]: I1206 12:12:20.137109 2703 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
*
* ==> storage-provisioner [f2e42d4ca968a2dfcaf7eb8e469ca28c397e9255eb1acd07cfc1b20cf2b62e6c] <==
*
I1206 11:53:38.420187 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1206 11:53:38.431069 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1206 11:53:38.431580 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1206 11:53:38.438407 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1206 11:53:38.438906 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fe3caf9-ced4-4d02-b2a9-f41ff017c30f", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' arkade_1545a963-f1ef-40bd-8a46-ff2bad698286 became leader
I1206 11:53:38.439150 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_arkade_1545a963-f1ef-40bd-8a46-ff2bad698286!
I1206 11:53:38.540018 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_arkade_1545a963-f1ef-40bd-8a46-ff2bad698286!
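
The kubelet entries above show the postgresql container cycling through CrashLoopBackOff, with the back-off growing from 20s to 2m40s, even though the controller-manager reports that the CSI hostpath volume was provisioned and attached successfully. As a follow-up outside this capture, the pod's own output is usually more telling than the node-level kubelet log; the sketch below uses standard kubectl commands and assumes the kubeconfig context minikube created for this profile is named arkade:

# Describe the crashing pod (events, restart counts, last container state)
kubectl --context arkade -n default describe pod postgresql-postgresql-0
# Logs of the previous, crashed container instance usually contain the actual startup error
kubectl --context arkade -n default logs postgresql-postgresql-0 --previous
# Recent events in the namespace, newest last
kubectl --context arkade -n default get events --sort-by=.lastTimestamp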