
[minikube 1.26.x/1.27.x/1.28.x] Fails upon startup with podman using rootless #14400

Closed
jesperpedersen opened this issue Jun 23, 2022 · 21 comments
Labels
co/podman-driver: podman driver issues
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@jesperpedersen

What Happened?

Using

{
    "container-runtime": "cri-o",
    "driver": "podman",
    "memory": "8192",
    "rootless": true
}
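
For reference, these settings live in ~/.minikube/config/config.json and can be reproduced through the minikube CLI rather than by editing the file by hand (a sketch, assuming the standard minikube config keys):

# Equivalent global configuration via the CLI
minikube config set driver podman
minikube config set container-runtime cri-o
minikube config set memory 8192
minikube config set rootless true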

😄 minikube v1.26.0 on Fedora 36
▪ MINIKUBE_ROOTLESS=true
✨ Using the podman driver based on user configuration
📌 Using rootless Podman driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.24.1 preload ...
> preloaded-images-k8s-v18-v1...: 473.80 MiB / 473.80 MiB 100.00% 42.14 Mi
> gcr.io/k8s-minikube/kicbase: 386.00 MiB / 386.00 MiB 100.00% 24.25 MiB p
E0623 09:01:31.296248 2614845 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=8192MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists

🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Attach the log file

  * ==> Audit <==

  * |---------|------|----------|-----------|---------|---------------------|----------|
    | Command | Args | Profile  |   User    | Version |     Start Time      | End Time |
    |---------|------|----------|-----------|---------|---------------------|----------|
    | start   |      | minikube | jpedersen | v1.26.0 | 23 Jun 22 09:01 EDT |          |
    |---------|------|----------|-----------|---------|---------------------|----------|

  * ==> Last Start <==

  * Log file created at: 2022/06/23 09:01:14
    Running on machine: localhost
    Binary: Built with gc go1.18.3 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0623 09:01:14.288768 2614845 out.go:296] Setting OutFile to fd 1 ...
    I0623 09:01:14.288842 2614845 out.go:348] isatty.IsTerminal(1) = true
    I0623 09:01:14.288844 2614845 out.go:309] Setting ErrFile to fd 2...
    I0623 09:01:14.288847 2614845 out.go:348] isatty.IsTerminal(2) = true
    I0623 09:01:14.289234 2614845 root.go:329] Updating PATH: /home/jpedersen/.minikube/bin
    I0623 09:01:14.289675 2614845 out.go:303] Setting JSON to false
    I0623 09:01:14.303388 2614845 start.go:115] hostinfo: {"hostname":"localhost.localdomain","uptime":1280733,"bootTime":1654708541,"procs":490,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"36","kernelVersion":"5.17.12-300.fc36.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"210e73a9-68eb-4c2f-86ec-60f335bd6933"}
    I0623 09:01:14.303437 2614845 start.go:125] virtualization: kvm host
    I0623 09:01:14.309894 2614845 out.go:177] 😄 minikube v1.26.0 on Fedora 36
    I0623 09:01:14.313124 2614845 out.go:177] ▪ MINIKUBE_ROOTLESS=true
    W0623 09:01:14.313185 2614845 preload.go:295] Failed to list preload files: open /home/jpedersen/.minikube/cache/preloaded-tarball: no such file or directory
    I0623 09:01:14.313201 2614845 notify.go:193] Checking for updates...
    I0623 09:01:14.316410 2614845 driver.go:360] Setting default libvirt URI to qemu:///system
    I0623 09:01:14.472425 2614845 podman.go:123] podman version: 4.1.1
    I0623 09:01:14.478711 2614845 out.go:177] ✨ Using the podman driver based on user configuration
    I0623 09:01:14.481610 2614845 start.go:284] selected driver: podman
    I0623 09:01:14.481617 2614845 start.go:805] validating driver "podman" against <nil>
    I0623 09:01:14.481646 2614845 start.go:816] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
    I0623 09:01:14.481793 2614845 cli_runner.go:164] Run: podman system info --format json
    I0623 09:01:14.693353 2614845 info.go:287] podman info: {Host:{BuildahVersion:1.26.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc36.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:36} MemFree:1655164928 MemTotal:66862510080 OCIRuntime:{Name:crun Package:crun-1.4.5-1.fc36.x86_64 Path:/usr/bin/crun Version:crun version 1.4.5
    commit: c381048530aa750495cf502ddb7181f2ded5b400
    spec: 1.0.0
    +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:17086799872 SwapTotal:17179860992 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.17.12-300.fc36.x86_64 Os:linux Security:{Rootless:true} Uptime:355h 45m 32.99s (Approximately 14.79 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io localhost:5000]} Store:{ConfigFile:/home/jpedersen/.config/containers/storage.conf ContainerStore:{Number:1} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/jpedersen/.local/share/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:6} RunRoot:/run/user/1000/containers VolumePath:/home/jpedersen/.local/share/containers/storage/volumes}}
    I0623 09:01:14.693439 2614845 start_flags.go:296] no existing cluster config was found, will generate one from the flags
    I0623 09:01:14.706172 2614845 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
    I0623 09:01:14.712472 2614845 out.go:177] 📌 Using rootless Podman driver
    I0623 09:01:14.715528 2614845 cni.go:95] Creating CNI manager for ""
    I0623 09:01:14.715533 2614845 cni.go:162] "podman" driver + cri-o runtime found, recommending kindnet
    I0623 09:01:14.715543 2614845 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
    I0623 09:01:14.715550 2614845 start_flags.go:310] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jpedersen:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
    I0623 09:01:14.718852 2614845 out.go:177] 👍 Starting control plane node minikube in cluster minikube
    I0623 09:01:14.721983 2614845 cache.go:120] Beginning downloading kic base image for podman with cri-o
    I0623 09:01:14.724943 2614845 out.go:177] 🚜 Pulling base image ...
    I0623 09:01:14.730749 2614845 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime cri-o
    I0623 09:01:14.730901 2614845 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
    I0623 09:01:14.731495 2614845 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory
    I0623 09:01:14.731721 2614845 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
    I0623 09:01:14.792187 2614845 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
    I0623 09:01:14.792203 2614845 cache.go:57] Caching tarball of preloaded images
    I0623 09:01:14.792393 2614845 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime cri-o
    I0623 09:01:14.795869 2614845 out.go:177] 💾 Downloading Kubernetes v1.24.1 preload ...
    I0623 09:01:14.798918 2614845 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 ...
    I0623 09:01:14.948593 2614845 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:4c8ad2429eafc79a0e5a20bdf41ae0bc -> /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
    I0623 09:01:27.403083 2614845 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 ...
    I0623 09:01:27.403159 2614845 preload.go:256] verifying checksumm of /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 ...
    I0623 09:01:28.396230 2614845 cache.go:60] Finished verifying existence of preloaded tar for v1.24.1 on cri-o
    I0623 09:01:28.396451 2614845 profile.go:148] Saving config to /home/jpedersen/.minikube/profiles/minikube/config.json ...
    I0623 09:01:28.396464 2614845 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/config.json: {Name:mkb9c351ba1576e37af5fa49932a884f1bd885a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
    I0623 09:01:31.296227 2614845 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 as a tarball
    E0623 09:01:31.296248 2614845 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue podman: load kic base image from cache if available for offline mode #8426
    I0623 09:01:31.296267 2614845 cache.go:208] Successfully downloaded all kic artifacts
    I0623 09:01:31.296318 2614845 start.go:352] acquiring machines lock for minikube: {Name:mka018440ecc214ae079b0f3318b8bab19ffd57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
    I0623 09:01:31.296461 2614845 start.go:356] acquired machines lock for "minikube" in 123.343µs
    I0623 09:01:31.296493 2614845 start.go:91] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:cri-o ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jpedersen:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:cri-o ControlPlane:true Worker:true}
    I0623 09:01:31.296608 2614845 start.go:131] createHost starting for "" (driver="podman")
    I0623 09:01:31.303053 2614845 out.go:204] 🔥 Creating podman container (CPUs=2, Memory=8192MB) ...
    I0623 09:01:31.303523 2614845 start.go:165] libmachine.API.Create for "minikube" (driver="podman")
    I0623 09:01:31.303548 2614845 client.go:168] LocalClient.Create starting
    I0623 09:01:31.303715 2614845 main.go:134] libmachine: Creating CA: /home/jpedersen/.minikube/certs/ca.pem
    I0623 09:01:31.582193 2614845 main.go:134] libmachine: Creating client certificate: /home/jpedersen/.minikube/certs/cert.pem
    I0623 09:01:31.747218 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
    I0623 09:01:31.910681 2614845 cli_runner.go:164] Run: podman network inspect minikube --format "{{range .}}{{if eq .Driver "bridge"}}{{(index .Subnets 0).Subnet}},{{(index .Subnets 0).Gateway}}{{end}}{{end}}"
    I0623 09:01:32.056211 2614845 network_create.go:76] Found existing network {name:minikube subnet:0xc00118c750 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
    I0623 09:01:32.056260 2614845 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
    I0623 09:01:32.056411 2614845 cli_runner.go:164] Run: podman ps -a --format {{.Names}}
    I0623 09:01:32.197216 2614845 cli_runner.go:164] Run: podman container inspect minikube --format {{.Config.Labels}}
    I0623 09:01:32.356615 2614845 kic.go:154] Found already existing abandoned minikube container, will try to delete.
    I0623 09:01:32.356787 2614845 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
    I0623 09:01:32.518253 2614845 cli_runner.go:164] Run: podman exec --privileged -t minikube /bin/bash -c "sudo init 0"
    I0623 09:01:33.681451 2614845 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
    I0623 09:01:33.825755 2614845 oci.go:660] temporary error: container minikube status is Running but expect it to be exited
    I0623 09:01:33.825774 2614845 oci.go:666] Successfully shutdown container minikube
    I0623 09:01:33.826017 2614845 cli_runner.go:164] Run: podman rm -f -v minikube
    I0623 09:01:44.436407 2614845 cli_runner.go:217] Completed: podman rm -f -v minikube: (10.610369038s)
    I0623 09:01:44.436628 2614845 cli_runner.go:164] Run: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
    W0623 09:01:44.511888 2614845 cli_runner.go:211] podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true returned with exit code 125
    I0623 09:01:44.511948 2614845 client.go:171] LocalClient.Create took 13.208391098s
    I0623 09:01:46.513016 2614845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
    I0623 09:01:46.513139 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
    I0623 09:01:46.651830 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
    W0623 09:01:46.691885 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
    I0623 09:01:46.692042 2614845 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
    stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:46.968848 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:47.112367 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:47.154247 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:47.154399 2614845 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:47.694965 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:47.855900 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:47.919498 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0623 09:01:47.919719 2614845 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

W0623 09:01:47.919734 2614845 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:47.919858 2614845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0623 09:01:47.919956 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:48.060680 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:48.121141 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:48.121271 2614845 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:48.356174 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:48.509568 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:48.566594 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:48.566737 2614845 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:48.914419 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:49.058822 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:49.121049 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:49.121116 2614845 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:49.789137 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:49.932957 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:49.992948 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0623 09:01:49.993080 2614845 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

W0623 09:01:49.993092 2614845 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:49.993099 2614845 start.go:134] duration metric: createHost completed in 18.696484428s
I0623 09:01:49.993117 2614845 start.go:81] releasing machines lock for "minikube", held for 18.696637626s
W0623 09:01:49.993142 2614845 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists
I0623 09:01:49.993769 2614845 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
W0623 09:01:50.035055 2614845 cli_runner.go:211] podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0623 09:01:50.035138 2614845 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
W0623 09:01:50.035330 2614845 out.go:239] 🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists

I0623 09:01:50.035402 2614845 start.go:614] Will try again in 5 seconds ...
I0623 09:01:55.035794 2614845 start.go:352] acquiring machines lock for minikube: {Name:mka018440ecc214ae079b0f3318b8bab19ffd57a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0623 09:01:55.036083 2614845 start.go:356] acquired machines lock for "minikube" in 261.133µs
I0623 09:01:55.036106 2614845 start.go:94] Skipping create...Using existing machine configuration
I0623 09:01:55.036112 2614845 fix.go:55] fixHost starting:
I0623 09:01:55.036549 2614845 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
W0623 09:01:55.077543 2614845 cli_runner.go:211] podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0623 09:01:55.077578 2614845 fix.go:103] recreateIfNeeded on minikube: state= err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:55.077584 2614845 fix.go:108] machineExists: true. err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
W0623 09:01:55.077590 2614845 fix.go:129] unexpected machine state, will restart: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:55.083445 2614845 out.go:177] 🔄 Restarting existing podman container for "minikube" ...
I0623 09:01:55.092255 2614845 cli_runner.go:164] Run: podman start minikube
W0623 09:01:55.125132 2614845 cli_runner.go:211] podman start minikube returned with exit code 125
I0623 09:01:55.125216 2614845 cli_runner.go:164] Run: podman inspect minikube
I0623 09:01:55.292051 2614845 errors.go:84] Postmortem inspect ("podman inspect minikube"): -- stdout --
[
    {
        "Name": "minikube",
        "Driver": "local",
        "Mountpoint": "/home/jpedersen/.local/share/containers/storage/volumes/minikube/_data",
        "CreatedAt": "2022-06-23T01:48:07.818393788-04:00",
        "Labels": {
            "created_by.minikube.sigs.k8s.io": "true",
            "name.minikube.sigs.k8s.io": "minikube"
        },
        "Scope": "local",
        "Options": {},
        "MountCount": 0
    }
]

-- /stdout --
I0623 09:01:55.292202 2614845 cli_runner.go:164] Run: podman logs --timestamps minikube
W0623 09:01:55.357282 2614845 cli_runner.go:211] podman logs --timestamps minikube returned with exit code 125
W0623 09:01:55.357317 2614845 errors.go:89] Failed to get postmortem logs. podman logs --timestamps minikube :podman logs --timestamps minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container
I0623 09:01:55.357482 2614845 cli_runner.go:164] Run: podman system info --format json
I0623 09:01:55.534338 2614845 info.go:287] podman info: {Host:{BuildahVersion:1.26.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc36.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:36} MemFree:1159290880 MemTotal:66862510080 OCIRuntime:{Name:crun Package:crun-1.4.5-1.fc36.x86_64 Path:/usr/bin/crun Version:crun version 1.4.5
commit: c381048530aa750495cf502ddb7181f2ded5b400
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:17086799872 SwapTotal:17179860992 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.17.12-300.fc36.x86_64 Os:linux Security:{Rootless:true} Uptime:355h 46m 13.86s (Approximately 14.79 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io localhost:5000]} Store:{ConfigFile:/home/jpedersen/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/jpedersen/.local/share/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:6} RunRoot:/run/user/1000/containers VolumePath:/home/jpedersen/.local/share/containers/storage/volumes}}
I0623 09:01:55.534397 2614845 errors.go:106] postmortem podman info: {Host:{BuildahVersion:1.26.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc36.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:36} MemFree:1159290880 MemTotal:66862510080 OCIRuntime:{Name:crun Package:crun-1.4.5-1.fc36.x86_64 Path:/usr/bin/crun Version:crun version 1.4.5
commit: c381048530aa750495cf502ddb7181f2ded5b400
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:17086799872 SwapTotal:17179860992 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.17.12-300.fc36.x86_64 Os:linux Security:{Rootless:true} Uptime:355h 46m 13.86s (Approximately 14.79 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io localhost:5000]} Store:{ConfigFile:/home/jpedersen/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/jpedersen/.local/share/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:6} RunRoot:/run/user/1000/containers VolumePath:/home/jpedersen/.local/share/containers/storage/volumes}}
I0623 09:01:55.534580 2614845 network_create.go:272] running [podman network inspect minikube] to gather additional debugging logs...
I0623 09:01:55.534609 2614845 cli_runner.go:164] Run: podman network inspect minikube
I0623 09:01:55.668149 2614845 network_create.go:277] output of [podman network inspect minikube]: -- stdout --
[
    {
        "name": "minikube",
        "id": "8b5429f977923686307bb50cfaa110e135e2e878bd5e3dc0d17a293467dcc6f8",
        "driver": "bridge",
        "network_interface": "podman2",
        "created": "2022-06-14T12:51:56.4525086-04:00",
        "subnets": [
            {
                "subnet": "192.168.49.0/24",
                "gateway": "192.168.49.1"
            }
        ],
        "ipv6_enabled": false,
        "internal": false,
        "dns_enabled": true,
        "labels": {
            "created_by.minikube.sigs.k8s.io": "true"
        },
        "ipam_options": {
            "driver": "host-local"
        }
    }
]

-- /stdout --
I0623 09:01:55.668326 2614845 cli_runner.go:164] Run: podman system info --format json
I0623 09:01:55.843391 2614845 info.go:287] podman info: {Host:{BuildahVersion:1.26.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc36.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:36} MemFree:1159413760 MemTotal:66862510080 OCIRuntime:{Name:crun Package:crun-1.4.5-1.fc36.x86_64 Path:/usr/bin/crun Version:crun version 1.4.5
commit: c381048530aa750495cf502ddb7181f2ded5b400
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:17086799872 SwapTotal:17179860992 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.17.12-300.fc36.x86_64 Os:linux Security:{Rootless:true} Uptime:355h 46m 14.14s (Approximately 14.79 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io localhost:5000]} Store:{ConfigFile:/home/jpedersen/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/jpedersen/.local/share/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:6} RunRoot:/run/user/1000/containers VolumePath:/home/jpedersen/.local/share/containers/storage/volumes}}
I0623 09:01:55.844041 2614845 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
W0623 09:01:55.907583 2614845 cli_runner.go:211] podman container inspect -f {{.NetworkSettings.IPAddress}} minikube returned with exit code 125
I0623 09:01:55.907781 2614845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0623 09:01:55.907916 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:56.045351 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:56.090963 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:56.091111 2614845 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:56.255743 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:56.420380 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:56.478226 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:56.478367 2614845 retry.go:31] will retry after 223.863569ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:56.702925 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:56.846621 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:56.879456 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:56.879617 2614845 retry.go:31] will retry after 450.512921ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:57.330257 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:57.465756 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:57.500734 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0623 09:01:57.500886 2614845 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

W0623 09:01:57.500902 2614845 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:57.501002 2614845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0623 09:01:57.501070 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:57.636717 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:57.689810 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:57.689948 2614845 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:58.018619 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:58.157473 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:58.192903 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:58.193069 2614845 retry.go:31] will retry after 267.848952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:58.461631 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:58.599604 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:58.635539 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0623 09:01:58.635754 2614845 retry.go:31] will retry after 495.369669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:59.131481 2614845 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 09:01:59.268483 2614845 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0623 09:01:59.312101 2614845 cli_runner.go:211] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0623 09:01:59.312232 2614845 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

W0623 09:01:59.312243 2614845 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube
I0623 09:01:59.312250 2614845 fix.go:57] fixHost completed within 4.276138739s
I0623 09:01:59.312256 2614845 start.go:81] releasing machines lock for "minikube", held for 4.276164848s
W0623 09:01:59.312420 2614845 out.go:239] 😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

I0623 09:01:59.318523 2614845 out.go:177]
W0623 09:01:59.321492 2614845 out.go:239] ❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container minikube

W0623 09:01:59.321520 2614845 out.go:239]
W0623 09:01:59.322809 2614845 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0623 09:01:59.327563 2614845 out.go:177]

Operating System

Redhat/Fedora

Driver

Podman

@spowelljr
Member

spowelljr commented Jun 23, 2022

Hi @jesperpedersen, thanks for reporting your issue with minikube.

It looks like an existing volume might be causing this issue:
Error: volume with name minikube already exists: volume already exists

Could you try running `minikube delete --all` to remove the existing volume, then start minikube again and see if that resolves your issue. Thanks!
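
If `minikube delete --all` doesn't remove it, the leftover volume can also be inspected and cleaned up with podman directly. A minimal sketch, assuming the volume labels shown in the error above:

# List volumes created by minikube, then remove the stale one by name
podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube
podman volume rm minikube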

@spowelljr added the kind/support, triage/needs-information, and co/podman-driver labels on Jun 23, 2022
@jesperpedersen
Author

@spowelljr I ran `minikube delete --all` and then `minikube start` after copying the configuration file back in place:

😄  minikube v1.26.0 on Fedora 36
    ▪ MINIKUBE_ROOTLESS=true
✨  Using the podman driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0623 13:56:21.796341 2655629 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=8192MB) ...
🎁  Preparing Kubernetes v1.24.1 on CRI-O 1.22.5 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
💢  initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...

💣  Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
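
One way to dig into the "apiserver is shutting down" failure is to look at the control-plane containers and kubelet logs inside the minikube node while it is up. A sketch, assuming the node container stays reachable via `minikube ssh`:

# Show all CRI containers (including an exited kube-apiserver) and recent kubelet logs
minikube ssh -- sudo crictl ps -a
minikube ssh -- sudo journalctl -u kubelet --no-pager -n 50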

The failed start produced the following logs:

* 
* ==> Audit <==
* |---------|-------|----------|-----------|---------|---------------------|---------------------|
| Command | Args  | Profile  |   User    | Version |     Start Time      |      End Time       |
|---------|-------|----------|-----------|---------|---------------------|---------------------|
| start   |       | minikube | jpedersen | v1.26.0 | 23 Jun 22 09:01 EDT |                     |
| delete  | --all | minikube | jpedersen | v1.26.0 | 23 Jun 22 13:56 EDT | 23 Jun 22 13:56 EDT |
| start   |       | minikube | jpedersen | v1.26.0 | 23 Jun 22 13:56 EDT |                     |
|---------|-------|----------|-----------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/06/23 13:56:21
Running on machine: localhost
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0623 13:56:21.384885 2655629 out.go:296] Setting OutFile to fd 1 ...
I0623 13:56:21.384957 2655629 out.go:348] isatty.IsTerminal(1) = true
I0623 13:56:21.384959 2655629 out.go:309] Setting ErrFile to fd 2...
I0623 13:56:21.384963 2655629 out.go:348] isatty.IsTerminal(2) = true
I0623 13:56:21.385185 2655629 root.go:329] Updating PATH: /home/jpedersen/.minikube/bin
I0623 13:56:21.385371 2655629 out.go:303] Setting JSON to false
I0623 13:56:21.398498 2655629 start.go:115] hostinfo: {"hostname":"localhost.localdomain","uptime":1298440,"bootTime":1654708541,"procs":454,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"36","kernelVersion":"5.17.12-300.fc36.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"210e73a9-68eb-4c2f-86ec-60f335bd6933"}
I0623 13:56:21.398547 2655629 start.go:125] virtualization: kvm host
I0623 13:56:21.405220 2655629 out.go:177] 😄  minikube v1.26.0 on Fedora 36
I0623 13:56:21.416834 2655629 out.go:177]     ▪ MINIKUBE_ROOTLESS=true
I0623 13:56:21.416710 2655629 notify.go:193] Checking for updates...
I0623 13:56:21.423246 2655629 driver.go:360] Setting default libvirt URI to qemu:///system
I0623 13:56:21.581189 2655629 podman.go:123] podman version: 4.1.1
I0623 13:56:21.590547 2655629 out.go:177] ✨  Using the podman driver based on user configuration
I0623 13:56:21.593463 2655629 start.go:284] selected driver: podman
I0623 13:56:21.593473 2655629 start.go:805] validating driver "podman" against <nil>
I0623 13:56:21.593498 2655629 start.go:816] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0623 13:56:21.593867 2655629 cli_runner.go:164] Run: podman system info --format json
I0623 13:56:21.774799 2655629 info.go:287] podman info: {Host:{BuildahVersion:1.26.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc36.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:36} MemFree:3274272768 MemTotal:66862510080 OCIRuntime:{Name:crun Package:crun-1.4.5-1.fc36.x86_64 Path:/usr/bin/crun Version:crun version 1.4.5
commit: c381048530aa750495cf502ddb7181f2ded5b400
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:17082081280 SwapTotal:17179860992 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.17.12-300.fc36.x86_64 Os:linux Security:{Rootless:true} Uptime:360h 40m 40.11s (Approximately 15.00 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io localhost:5000]} Store:{ConfigFile:/home/jpedersen/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/jpedersen/.local/share/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:6} RunRoot:/run/user/1000/containers VolumePath:/home/jpedersen/.local/share/containers/storage/volumes}}
I0623 13:56:21.774920 2655629 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
I0623 13:56:21.776101 2655629 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0623 13:56:21.779366 2655629 out.go:177] 📌  Using rootless Podman driver
I0623 13:56:21.782259 2655629 cni.go:95] Creating CNI manager for ""
I0623 13:56:21.782269 2655629 cni.go:162] "podman" driver + cri-o runtime found, recommending kindnet
I0623 13:56:21.782286 2655629 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
I0623 13:56:21.782311 2655629 start_flags.go:310] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jpedersen:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0623 13:56:21.785699 2655629 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0623 13:56:21.788577 2655629 cache.go:120] Beginning downloading kic base image for podman with cri-o
I0623 13:56:21.791617 2655629 out.go:177] 🚜  Pulling base image ...
I0623 13:56:21.794693 2655629 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime cri-o
I0623 13:56:21.794778 2655629 preload.go:148] Found local preload: /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
I0623 13:56:21.794789 2655629 cache.go:57] Caching tarball of preloaded images
I0623 13:56:21.794836 2655629 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
I0623 13:56:21.795327 2655629 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory
I0623 13:56:21.795347 2655629 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory, skipping pull
I0623 13:56:21.795354 2655629 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 exists in cache, skipping pull
I0623 13:56:21.795382 2655629 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 as a tarball
I0623 13:56:21.795396 2655629 preload.go:174] Found /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0623 13:56:21.795432 2655629 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on cri-o
I0623 13:56:21.796057 2655629 profile.go:148] Saving config to /home/jpedersen/.minikube/profiles/minikube/config.json ...
I0623 13:56:21.796086 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/config.json: {Name:mkb9c351ba1576e37af5fa49932a884f1bd885a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
E0623 13:56:21.796341 2655629 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
I0623 13:56:21.796367 2655629 cache.go:208] Successfully downloaded all kic artifacts
I0623 13:56:21.796413 2655629 start.go:352] acquiring machines lock for minikube: {Name:mka018440ecc214ae079b0f3318b8bab19ffd57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0623 13:56:21.796497 2655629 start.go:356] acquired machines lock for "minikube" in 68.419µs
I0623 13:56:21.796514 2655629 start.go:91] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:cri-o ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jpedersen:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:cri-o ControlPlane:true Worker:true}
I0623 13:56:21.796658 2655629 start.go:131] createHost starting for "" (driver="podman")
I0623 13:56:21.802959 2655629 out.go:204] 🔥  Creating podman container (CPUs=2, Memory=8192MB) ...
I0623 13:56:21.803340 2655629 start.go:165] libmachine.API.Create for "minikube" (driver="podman")
I0623 13:56:21.803372 2655629 client.go:168] LocalClient.Create starting
I0623 13:56:21.803460 2655629 main.go:134] libmachine: Reading certificate data from /home/jpedersen/.minikube/certs/ca.pem
I0623 13:56:21.803577 2655629 main.go:134] libmachine: Decoding PEM data...
I0623 13:56:21.803602 2655629 main.go:134] libmachine: Parsing certificate...
I0623 13:56:21.803689 2655629 main.go:134] libmachine: Reading certificate data from /home/jpedersen/.minikube/certs/cert.pem
I0623 13:56:21.803717 2655629 main.go:134] libmachine: Decoding PEM data...
I0623 13:56:21.803739 2655629 main.go:134] libmachine: Parsing certificate...
I0623 13:56:21.804381 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:21.950868 2655629 cli_runner.go:164] Run: podman network inspect minikube --format "{{range .}}{{if eq .Driver "bridge"}}{{(index .Subnets 0).Subnet}},{{(index .Subnets 0).Gateway}}{{end}}{{end}}"
I0623 13:56:22.099485 2655629 network_create.go:76] Found existing network {name:minikube subnet:0xc0015f6630 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0623 13:56:22.099522 2655629 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0623 13:56:22.099735 2655629 cli_runner.go:164] Run: podman ps -a --format {{.Names}}
I0623 13:56:22.258620 2655629 cli_runner.go:164] Run: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0623 13:56:22.405741 2655629 oci.go:103] Successfully created a podman volume minikube
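(If an earlier, failed start leaves this volume behind, the create step above is the one that collides with it. A minimal manual cleanup, assuming the default profile name "minikube", would be:

# list volumes created by minikube, via the label set in the command above
podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube
# remove the stale volume by hand; `minikube delete` also removes it along with everything else
podman volume rm minikube
)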
I0623 13:56:22.405904 2655629 cli_runner.go:164] Run: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.32 -d /var/lib
I0623 13:56:22.846596 2655629 oci.go:107] Successfully prepared a podman volume minikube
I0623 13:56:22.846663 2655629 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime cri-o
I0623 13:56:22.846684 2655629 kic.go:179] Starting extracting preloaded images to volume ...
I0623 13:56:22.847250 2655629 cli_runner.go:164] Run: podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32 -I lz4 -xf /preloaded.tar -C /extractDir
I0623 13:56:25.626720 2655629 cli_runner.go:217] Completed: podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/jpedersen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32 -I lz4 -xf /preloaded.tar -C /extractDir: (2.779406283s)
I0623 13:56:25.626749 2655629 kic.go:188] duration metric: took 2.780058 seconds to extract preloaded images to volume
W0623 13:56:25.627006 2655629 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0623 13:56:25.627059 2655629 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0623 13:56:25.627397 2655629 cli_runner.go:164] Run: podman info --format "'{{json .SecurityOptions}}'"
W0623 13:56:25.732682 2655629 cli_runner.go:211] podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0623 13:56:25.733044 2655629 cli_runner.go:164] Run: podman run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory=8192mb -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32
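(The --publish flags above bind container ports to dynamically assigned host ports on 127.0.0.1; the SSH steps further down use one of them, 127.0.0.1:45577 -> 22/tcp. To see which mappings podman actually picked, one option is:

# show the host-port mappings for the node container
podman port minikube
)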
I0623 13:56:26.070455 2655629 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Running}}
I0623 13:56:26.203895 2655629 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
I0623 13:56:26.346933 2655629 cli_runner.go:164] Run: podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0623 13:56:26.484364 2655629 oci.go:144] the created container "minikube" has a running status.
I0623 13:56:26.484387 2655629 kic.go:210] Creating ssh key for kic: /home/jpedersen/.minikube/machines/minikube/id_rsa...
I0623 13:56:26.553514 2655629 kic_runner.go:191] podman (temp): /home/jpedersen/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0623 13:56:26.555541 2655629 kic_runner.go:261] Run: /usr/bin/podman cp /tmp/tmpf-memory-asset1400961096 minikube:/home/docker/.ssh/authorized_keys
I0623 13:56:26.795762 2655629 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
I0623 13:56:26.955469 2655629 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0623 13:56:26.955479 2655629 kic_runner.go:114] Args: [podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0623 13:56:27.194412 2655629 cli_runner.go:164] Run: podman container inspect minikube --format={{.State.Status}}
I0623 13:56:27.327445 2655629 machine.go:88] provisioning docker machine ...
I0623 13:56:27.327478 2655629 ubuntu.go:169] provisioning hostname "minikube"
I0623 13:56:27.327755 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:27.490758 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:27.625396 2655629 main.go:134] libmachine: Using SSH client type: native
I0623 13:56:27.625678 2655629 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 45577 <nil> <nil>}
I0623 13:56:27.625694 2655629 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0623 13:56:27.788520 2655629 main.go:134] libmachine: SSH cmd err, output: <nil>: minikube

I0623 13:56:27.788675 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:27.921799 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:28.081862 2655629 main.go:134] libmachine: Using SSH client type: native
I0623 13:56:28.082078 2655629 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 45577 <nil> <nil>}
I0623 13:56:28.082103 2655629 main.go:134] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0623 13:56:28.207842 2655629 main.go:134] libmachine: SSH cmd err, output: <nil>: 
I0623 13:56:28.207863 2655629 ubuntu.go:175] set auth options {CertDir:/home/jpedersen/.minikube CaCertPath:/home/jpedersen/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jpedersen/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jpedersen/.minikube/machines/server.pem ServerKeyPath:/home/jpedersen/.minikube/machines/server-key.pem ClientKeyPath:/home/jpedersen/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jpedersen/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jpedersen/.minikube}
I0623 13:56:28.207890 2655629 ubuntu.go:177] setting up certificates
I0623 13:56:28.207902 2655629 provision.go:83] configureAuth start
I0623 13:56:28.208036 2655629 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0623 13:56:28.361091 2655629 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 13:56:28.510466 2655629 provision.go:138] copyHostCerts
I0623 13:56:28.510562 2655629 exec_runner.go:151] cp: /home/jpedersen/.minikube/certs/ca.pem --> /home/jpedersen/.minikube/ca.pem (1070 bytes)
I0623 13:56:28.510767 2655629 exec_runner.go:151] cp: /home/jpedersen/.minikube/certs/cert.pem --> /home/jpedersen/.minikube/cert.pem (1115 bytes)
I0623 13:56:28.510881 2655629 exec_runner.go:151] cp: /home/jpedersen/.minikube/certs/key.pem --> /home/jpedersen/.minikube/key.pem (1675 bytes)
I0623 13:56:28.510982 2655629 provision.go:112] generating server cert: /home/jpedersen/.minikube/machines/server.pem ca-key=/home/jpedersen/.minikube/certs/ca.pem private-key=/home/jpedersen/.minikube/certs/ca-key.pem org=jpedersen.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0623 13:56:28.635784 2655629 provision.go:172] copyRemoteCerts
I0623 13:56:28.635902 2655629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0623 13:56:28.635942 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:28.784592 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:28.920169 2655629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:45577 SSHKeyPath:/home/jpedersen/.minikube/machines/minikube/id_rsa Username:docker}
I0623 13:56:29.019989 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0623 13:56:29.055539 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0623 13:56:29.083428 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0623 13:56:29.109453 2655629 provision.go:86] duration metric: configureAuth took 901.538198ms
I0623 13:56:29.109469 2655629 ubuntu.go:193] setting minikube options for container-runtime
I0623 13:56:29.109749 2655629 config.go:178] Loaded profile config "minikube": Driver=podman, ContainerRuntime=cri-o, KubernetesVersion=v1.24.1
I0623 13:56:29.109907 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:29.262740 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:29.412466 2655629 main.go:134] libmachine: Using SSH client type: native
I0623 13:56:29.412707 2655629 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 45577 <nil> <nil>}
I0623 13:56:29.412732 2655629 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0623 13:56:29.673793 2655629 main.go:134] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0623 13:56:29.673805 2655629 machine.go:91] provisioned docker machine in 2.346348772s
I0623 13:56:29.673820 2655629 client.go:171] LocalClient.Create took 7.870435358s
I0623 13:56:29.673831 2655629 start.go:173] duration metric: libmachine.API.Create for "minikube" took 7.870491995s
I0623 13:56:29.673836 2655629 start.go:306] post-start starting for "minikube" (driver="podman")
I0623 13:56:29.673838 2655629 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0623 13:56:29.673927 2655629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0623 13:56:29.673970 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:29.812674 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:29.969132 2655629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:45577 SSHKeyPath:/home/jpedersen/.minikube/machines/minikube/id_rsa Username:docker}
I0623 13:56:30.064344 2655629 ssh_runner.go:195] Run: cat /etc/os-release
I0623 13:56:30.068395 2655629 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0623 13:56:30.068410 2655629 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0623 13:56:30.068415 2655629 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0623 13:56:30.068419 2655629 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0623 13:56:30.068425 2655629 filesync.go:126] Scanning /home/jpedersen/.minikube/addons for local assets ...
I0623 13:56:30.068501 2655629 filesync.go:126] Scanning /home/jpedersen/.minikube/files for local assets ...
I0623 13:56:30.068530 2655629 start.go:309] post-start completed in 394.688734ms
I0623 13:56:30.069042 2655629 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0623 13:56:30.230278 2655629 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 13:56:30.393120 2655629 profile.go:148] Saving config to /home/jpedersen/.minikube/profiles/minikube/config.json ...
I0623 13:56:30.393830 2655629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0623 13:56:30.393934 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:30.540457 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:30.700585 2655629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:45577 SSHKeyPath:/home/jpedersen/.minikube/machines/minikube/id_rsa Username:docker}
I0623 13:56:30.797928 2655629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0623 13:56:30.801653 2655629 start.go:134] duration metric: createHost completed in 9.004972085s
I0623 13:56:30.801665 2655629 start.go:81] releasing machines lock for "minikube", held for 9.00516055s
I0623 13:56:30.801785 2655629 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0623 13:56:30.961529 2655629 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 13:56:31.094337 2655629 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0623 13:56:31.094394 2655629 ssh_runner.go:195] Run: systemctl --version
I0623 13:56:31.094402 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:31.094432 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:31.229511 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:31.233328 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 13:56:31.368466 2655629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:45577 SSHKeyPath:/home/jpedersen/.minikube/machines/minikube/id_rsa Username:docker}
I0623 13:56:31.387229 2655629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:45577 SSHKeyPath:/home/jpedersen/.minikube/machines/minikube/id_rsa Username:docker}
I0623 13:56:31.595553 2655629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0623 13:56:31.625729 2655629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0623 13:56:31.636356 2655629 docker.go:179] disabling docker service ...
I0623 13:56:31.636495 2655629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0623 13:56:31.647712 2655629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0623 13:56:31.658652 2655629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0623 13:56:31.822944 2655629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0623 13:56:32.019849 2655629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0623 13:56:32.029835 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0623 13:56:32.047081 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.7"|' -i /etc/crio/crio.conf.d/02-crio.conf"
I0623 13:56:32.060934 2655629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0623 13:56:32.075115 2655629 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0623 13:56:32.075237 2655629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
W0623 13:56:32.095033 2655629 crio.go:140] "sudo sysctl net.bridge.bridge-nf-call-iptables" failed, which may be ok: sudo modprobe br_netfilter: Process exited with status 1
stdout:

stderr:
modprobe: ERROR: could not insert 'br_netfilter': Operation not permitted
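(Rootless containers cannot load kernel modules, so the modprobe above is expected to fail inside the node, and minikube treats it as non-fatal. If bridged pod networking misbehaves later, loading the module on the Fedora host itself is a common workaround; a sketch, run on the host with root:

# load br_netfilter on the host, since the rootless node container cannot
sudo modprobe br_netfilter
# optionally persist it across reboots
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
)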
I0623 13:56:32.095174 2655629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0623 13:56:32.107315 2655629 ssh_runner.go:195] Run: uname -r
I0623 13:56:32.109935 2655629 ssh_runner.go:195] Run: sh -euc "(echo 5.17.12-300.fc36.x86_64; echo 5.11) | sort -V | head -n1"
I0623 13:56:32.113108 2655629 ssh_runner.go:195] Run: uname -r
I0623 13:56:32.116165 2655629 ssh_runner.go:195] Run: sh -euc "(echo 5.17.12-300.fc36.x86_64; echo 5.13) | sort -V | head -n1"
I0623 13:56:32.119363 2655629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/crio.service.d
I0623 13:56:32.130995 2655629 ssh_runner.go:362] scp memory --> /etc/systemd/system/crio.service.d/10-rootless.conf (41 bytes)
I0623 13:56:32.146039 2655629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0623 13:56:32.246459 2655629 ssh_runner.go:195] Run: sudo systemctl reload crio
I0623 13:56:32.265193 2655629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0623 13:56:32.277479 2655629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0623 13:56:32.370820 2655629 ssh_runner.go:195] Run: sudo systemctl restart crio
I0623 13:56:32.476403 2655629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0623 13:56:32.579573 2655629 ssh_runner.go:195] Run: sudo systemctl start crio
I0623 13:56:32.595462 2655629 start.go:447] Will wait 60s for socket path /var/run/crio/crio.sock
I0623 13:56:32.595616 2655629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0623 13:56:32.600266 2655629 start.go:468] Will wait 60s for crictl version
I0623 13:56:32.600410 2655629 ssh_runner.go:195] Run: sudo crictl version
I0623 13:56:32.630565 2655629 start.go:477] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.22.5
RuntimeApiVersion:  v1alpha2
I0623 13:56:32.630634 2655629 ssh_runner.go:195] Run: crio --version
I0623 13:56:32.654075 2655629 ssh_runner.go:195] Run: crio --version
I0623 13:56:32.710417 2655629 out.go:177] 🎁  Preparing Kubernetes v1.24.1 on CRI-O 1.22.5 ...
I0623 13:56:32.713585 2655629 ssh_runner.go:195] Run: grep 192.168.1.18	host.minikube.internal$ /etc/hosts
I0623 13:56:32.716943 2655629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.1.18	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0623 13:56:32.725292 2655629 cli_runner.go:164] Run: podman version --format {{.Version}}
I0623 13:56:32.861403 2655629 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0623 13:56:33.023401 2655629 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime cri-o
I0623 13:56:33.023534 2655629 ssh_runner.go:195] Run: sudo crictl images --output json
I0623 13:56:33.080710 2655629 crio.go:494] all images are preloaded for cri-o runtime.
I0623 13:56:33.080721 2655629 crio.go:413] Images already preloaded, skipping extraction
I0623 13:56:33.080813 2655629 ssh_runner.go:195] Run: sudo crictl images --output json
I0623 13:56:33.108536 2655629 crio.go:494] all images are preloaded for cri-o runtime.
I0623 13:56:33.108547 2655629 cache_images.go:84] Images are preloaded, skipping loading
I0623 13:56:33.108655 2655629 ssh_runner.go:195] Run: crio config
I0623 13:56:33.147368 2655629 cni.go:95] Creating CNI manager for ""
I0623 13:56:33.147381 2655629 cni.go:162] "podman" driver + cri-o runtime found, recommending kindnet
I0623 13:56:33.147393 2655629 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0623 13:56:33.147409 2655629 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:KubeletInUserNamespace=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:KubeletInUserNamespace=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:KubeletInUserNamespace=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0623 13:56:33.147584 2655629 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    feature-gates: "KubeletInUserNamespace=true"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    feature-gates: "KubeletInUserNamespace=true"
    leader-elect: "false"
scheduler:
  extraArgs:
    feature-gates: "KubeletInUserNamespace=true"
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0623 13:56:33.147674 2655629 kubeadm.go:961] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --feature-gates=KubeletInUserNamespace=true --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
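(Both the kubeadm config above and this kubelet unit are rendered to files inside the node container in the scp steps that follow. A quick way to eyeball what actually landed there, assuming the default profile, is:

# view the rendered kubeadm config inside the minikube node
minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
# and the kubelet systemd drop-in
minikube ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
)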
I0623 13:56:33.147773 2655629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
I0623 13:56:33.158491 2655629 binaries.go:44] Found k8s binaries, skipping transfer
I0623 13:56:33.158635 2655629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0623 13:56:33.167939 2655629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
I0623 13:56:33.183576 2655629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0623 13:56:33.198915 2655629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
I0623 13:56:33.214763 2655629 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0623 13:56:33.218497 2655629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0623 13:56:33.229595 2655629 certs.go:54] Setting up /home/jpedersen/.minikube/profiles/minikube for IP: 192.168.49.2
I0623 13:56:33.229662 2655629 certs.go:187] generating minikubeCA CA: /home/jpedersen/.minikube/ca.key
I0623 13:56:33.383950 2655629 crypto.go:156] Writing cert to /home/jpedersen/.minikube/ca.crt ...
I0623 13:56:33.383961 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/ca.crt: {Name:mk9526059556f01932ff5a97d459f6d845c470cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.384110 2655629 crypto.go:164] Writing key to /home/jpedersen/.minikube/ca.key ...
I0623 13:56:33.384113 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/ca.key: {Name:mkb855b2d2b8a2c526e1bec047580c4bf23347f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.384162 2655629 certs.go:187] generating proxyClientCA CA: /home/jpedersen/.minikube/proxy-client-ca.key
I0623 13:56:33.488562 2655629 crypto.go:156] Writing cert to /home/jpedersen/.minikube/proxy-client-ca.crt ...
I0623 13:56:33.488570 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/proxy-client-ca.crt: {Name:mk45d1f61ca8214f9c47437ca66f4b06d8f325f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.488711 2655629 crypto.go:164] Writing key to /home/jpedersen/.minikube/proxy-client-ca.key ...
I0623 13:56:33.488714 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/proxy-client-ca.key: {Name:mk5fc53b24e4504f75dd1488a92f64076eefc9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.488783 2655629 certs.go:302] generating minikube-user signed cert: /home/jpedersen/.minikube/profiles/minikube/client.key
I0623 13:56:33.488789 2655629 crypto.go:68] Generating cert /home/jpedersen/.minikube/profiles/minikube/client.crt with IP's: []
I0623 13:56:33.665498 2655629 crypto.go:156] Writing cert to /home/jpedersen/.minikube/profiles/minikube/client.crt ...
I0623 13:56:33.665508 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/client.crt: {Name:mk6bb63c90eb3a00f0e5010112fae207d48de6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.665651 2655629 crypto.go:164] Writing key to /home/jpedersen/.minikube/profiles/minikube/client.key ...
I0623 13:56:33.665654 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/client.key: {Name:mkad11ae7f14a15cd9345a7b03db397c47f33700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:33.665698 2655629 certs.go:302] generating minikube signed cert: /home/jpedersen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0623 13:56:33.665707 2655629 crypto.go:68] Generating cert /home/jpedersen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0623 13:56:34.019176 2655629 crypto.go:156] Writing cert to /home/jpedersen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0623 13:56:34.019193 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk1430be30a66bb5ede04a009161f780e6060e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:34.019451 2655629 crypto.go:164] Writing key to /home/jpedersen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0623 13:56:34.019456 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk2035cb4881bb04be402ac553c447c9265f971f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:34.019511 2655629 certs.go:320] copying /home/jpedersen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/jpedersen/.minikube/profiles/minikube/apiserver.crt
I0623 13:56:34.019551 2655629 certs.go:324] copying /home/jpedersen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/jpedersen/.minikube/profiles/minikube/apiserver.key
I0623 13:56:34.019580 2655629 certs.go:302] generating aggregator signed cert: /home/jpedersen/.minikube/profiles/minikube/proxy-client.key
I0623 13:56:34.019588 2655629 crypto.go:68] Generating cert /home/jpedersen/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0623 13:56:34.252268 2655629 crypto.go:156] Writing cert to /home/jpedersen/.minikube/profiles/minikube/proxy-client.crt ...
I0623 13:56:34.252287 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcf38e734a678f35aee384cd4ceba4a0737796f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:34.252523 2655629 crypto.go:164] Writing key to /home/jpedersen/.minikube/profiles/minikube/proxy-client.key ...
I0623 13:56:34.252527 2655629 lock.go:35] WriteFile acquiring /home/jpedersen/.minikube/profiles/minikube/proxy-client.key: {Name:mk673569a2d2eaa75c25c97b9527b5e9a0dea821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0623 13:56:34.252675 2655629 certs.go:388] found cert: /home/jpedersen/.minikube/certs/home/jpedersen/.minikube/certs/ca-key.pem (1675 bytes)
I0623 13:56:34.252698 2655629 certs.go:388] found cert: /home/jpedersen/.minikube/certs/home/jpedersen/.minikube/certs/ca.pem (1070 bytes)
I0623 13:56:34.252714 2655629 certs.go:388] found cert: /home/jpedersen/.minikube/certs/home/jpedersen/.minikube/certs/cert.pem (1115 bytes)
I0623 13:56:34.252725 2655629 certs.go:388] found cert: /home/jpedersen/.minikube/certs/home/jpedersen/.minikube/certs/key.pem (1675 bytes)
I0623 13:56:34.253111 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0623 13:56:34.276272 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0623 13:56:34.307873 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0623 13:56:34.339256 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0623 13:56:34.370915 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0623 13:56:34.402604 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0623 13:56:34.434215 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0623 13:56:34.460786 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0623 13:56:34.480940 2655629 ssh_runner.go:362] scp /home/jpedersen/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0623 13:56:34.503938 2655629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0623 13:56:34.520459 2655629 ssh_runner.go:195] Run: openssl version
I0623 13:56:34.523879 2655629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0623 13:56:34.538548 2655629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0623 13:56:34.543794 2655629 certs.go:431] hashing: -rw-r--r--. 1 root root 1111 Jun 23 17:56 /usr/share/ca-certificates/minikubeCA.pem
I0623 13:56:34.543901 2655629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0623 13:56:34.551295 2655629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0623 13:56:34.562917 2655629 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:cri-o ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jpedersen:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0623 13:56:34.563006 2655629 cri.go:52] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0623 13:56:34.563109 2655629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0623 13:56:34.590582 2655629 cri.go:87] found id: ""
I0623 13:56:34.590681 2655629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0623 13:56:34.601252 2655629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0623 13:56:34.613121 2655629 kubeadm.go:221] ignoring SystemVerification for kubeadm because of podman driver
I0623 13:56:34.613239 2655629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0623 13:56:34.624946 2655629 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0623 13:56:34.624966 2655629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0623 13:56:34.958792 2655629 out.go:204]     ▪ Generating certificates and keys ...
I0623 13:56:36.898718 2655629 out.go:204]     ▪ Booting up control plane ...
I0623 13:56:44.958433 2655629 out.go:204]     ▪ Configuring RBAC rules ...
I0623 13:56:45.554795 2655629 cni.go:95] Creating CNI manager for ""
I0623 13:56:45.554803 2655629 cni.go:162] "podman" driver + cri-o runtime found, recommending kindnet
I0623 13:56:45.558365 2655629 out.go:177] 🔗  Configuring CNI (Container Networking Interface) ...
I0623 13:56:45.562673 2655629 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0623 13:56:45.565890 2655629 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.1/kubectl ...
I0623 13:56:45.565897 2655629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0623 13:56:45.578834 2655629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0623 13:57:06.316861 2655629 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (20.738007103s)
W0623 13:57:06.317000 2655629 out.go:239] 💢  initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

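(The failure above is the CNI apply racing an apiserver that went down partway through; the retry below hits the same spot. When this happens, inspecting the apiserver container inside the node usually shows why it exited. A sketch; the container ID placeholder is whatever crictl prints:

# list apiserver containers inside the node, including exited ones
minikube ssh -- sudo crictl ps -a --name kube-apiserver
# then read the last lines of its log, replacing <container-id> with the ID printed above
minikube ssh -- sudo crictl logs --tail 50 <container-id>
)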
I0623 13:57:06.317038 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I0623 13:57:08.184784 2655629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.867722604s)
I0623 13:57:08.185236 2655629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0623 13:57:08.196994 2655629 kubeadm.go:221] ignoring SystemVerification for kubeadm because of podman driver
I0623 13:57:08.197091 2655629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0623 13:57:08.204837 2655629 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0623 13:57:08.204876 2655629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0623 13:57:08.482931 2655629 out.go:204]     ▪ Generating certificates and keys ...
I0623 13:57:09.437004 2655629 out.go:204]     ▪ Booting up control plane ...
I0623 13:57:16.491322 2655629 out.go:204]     ▪ Configuring RBAC rules ...
I0623 13:57:17.085216 2655629 cni.go:95] Creating CNI manager for ""
I0623 13:57:17.085224 2655629 cni.go:162] "podman" driver + cri-o runtime found, recommending kindnet
I0623 13:57:17.088273 2655629 out.go:177] 🔗  Configuring CNI (Container Networking Interface) ...
I0623 13:57:17.091392 2655629 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0623 13:57:17.094382 2655629 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.1/kubectl ...
I0623 13:57:17.094389 2655629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0623 13:57:17.107533 2655629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0623 13:57:37.763859 2655629 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (20.656297359s)
I0623 13:57:37.763945 2655629 kubeadm.go:397] StartCluster complete in 1m3.201033841s
I0623 13:57:37.763979 2655629 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0623 13:57:37.764187 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0623 13:57:37.797961 2655629 cri.go:87] found id: "2b7a1f65a1b98c00e94bff051bf3293d6c1c499aa68ba36bf01ac0bd8bc4f323"
I0623 13:57:37.797986 2655629 cri.go:87] found id: ""
I0623 13:57:37.797995 2655629 logs.go:274] 1 containers: [2b7a1f65a1b98c00e94bff051bf3293d6c1c499aa68ba36bf01ac0bd8bc4f323]
I0623 13:57:37.798127 2655629 ssh_runner.go:195] Run: which crictl
I0623 13:57:37.800812 2655629 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0623 13:57:37.800875 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0623 13:57:37.818649 2655629 cri.go:87] found id: "f8b5647f08186c4b8d0b95b8b8a14501670e1a74dedf998922d665e60a09dd05"
I0623 13:57:37.818660 2655629 cri.go:87] found id: ""
I0623 13:57:37.818663 2655629 logs.go:274] 1 containers: [f8b5647f08186c4b8d0b95b8b8a14501670e1a74dedf998922d665e60a09dd05]
I0623 13:57:37.818721 2655629 ssh_runner.go:195] Run: which crictl
I0623 13:57:37.823015 2655629 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0623 13:57:37.823112 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0623 13:57:37.842008 2655629 cri.go:87] found id: ""
I0623 13:57:37.842021 2655629 logs.go:274] 0 containers: []
W0623 13:57:37.842027 2655629 logs.go:276] No container was found matching "coredns"
I0623 13:57:37.842033 2655629 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0623 13:57:37.842136 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0623 13:57:37.873034 2655629 cri.go:87] found id: "26491b28ec679fb49752b6220fcbae6111e1e60f1c55865fa9494eda69246ac8"
I0623 13:57:37.873044 2655629 cri.go:87] found id: ""
I0623 13:57:37.873048 2655629 logs.go:274] 1 containers: [26491b28ec679fb49752b6220fcbae6111e1e60f1c55865fa9494eda69246ac8]
I0623 13:57:37.873130 2655629 ssh_runner.go:195] Run: which crictl
I0623 13:57:37.875473 2655629 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0623 13:57:37.875570 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0623 13:57:37.911589 2655629 cri.go:87] found id: ""
I0623 13:57:37.911604 2655629 logs.go:274] 0 containers: []
W0623 13:57:37.911612 2655629 logs.go:276] No container was found matching "kube-proxy"
I0623 13:57:37.911620 2655629 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0623 13:57:37.911769 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0623 13:57:37.939427 2655629 cri.go:87] found id: ""
I0623 13:57:37.939437 2655629 logs.go:274] 0 containers: []
W0623 13:57:37.939441 2655629 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 13:57:37.939444 2655629 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0623 13:57:37.939502 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0623 13:57:37.970610 2655629 cri.go:87] found id: ""
I0623 13:57:37.970622 2655629 logs.go:274] 0 containers: []
W0623 13:57:37.970629 2655629 logs.go:276] No container was found matching "storage-provisioner"
I0623 13:57:37.970644 2655629 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0623 13:57:37.970736 2655629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0623 13:57:38.000473 2655629 cri.go:87] found id: "d882df601c552a4d9c20c440c1e1a9a32682ab19a136111878e1d312155e978b"
I0623 13:57:38.000490 2655629 cri.go:87] found id: ""
I0623 13:57:38.000497 2655629 logs.go:274] 1 containers: [d882df601c552a4d9c20c440c1e1a9a32682ab19a136111878e1d312155e978b]
I0623 13:57:38.000592 2655629 ssh_runner.go:195] Run: which crictl
I0623 13:57:38.002515 2655629 logs.go:123] Gathering logs for kube-apiserver [2b7a1f65a1b98c00e94bff051bf3293d6c1c499aa68ba36bf01ac0bd8bc4f323] ...
I0623 13:57:38.002526 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7a1f65a1b98c00e94bff051bf3293d6c1c499aa68ba36bf01ac0bd8bc4f323"
I0623 13:57:38.073227 2655629 logs.go:123] Gathering logs for kube-scheduler [26491b28ec679fb49752b6220fcbae6111e1e60f1c55865fa9494eda69246ac8] ...
I0623 13:57:38.073238 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26491b28ec679fb49752b6220fcbae6111e1e60f1c55865fa9494eda69246ac8"
I0623 13:57:38.105995 2655629 logs.go:123] Gathering logs for kube-controller-manager [d882df601c552a4d9c20c440c1e1a9a32682ab19a136111878e1d312155e978b] ...
I0623 13:57:38.106006 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d882df601c552a4d9c20c440c1e1a9a32682ab19a136111878e1d312155e978b"
I0623 13:57:38.133089 2655629 logs.go:123] Gathering logs for CRI-O ...
I0623 13:57:38.133108 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0623 13:57:38.229730 2655629 logs.go:123] Gathering logs for dmesg ...
I0623 13:57:38.229741 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 13:57:38.247376 2655629 logs.go:123] Gathering logs for describe nodes ...
I0623 13:57:38.247390 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0623 13:57:38.310822 2655629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0623 13:57:38.310836 2655629 logs.go:123] Gathering logs for container status ...
I0623 13:57:38.310849 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 13:57:38.340126 2655629 logs.go:123] Gathering logs for kubelet ...
I0623 13:57:38.340141 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 13:57:38.456941 2655629 logs.go:123] Gathering logs for etcd [f8b5647f08186c4b8d0b95b8b8a14501670e1a74dedf998922d665e60a09dd05] ...
I0623 13:57:38.456952 2655629 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b5647f08186c4b8d0b95b8b8a14501670e1a74dedf998922d665e60a09dd05"
W0623 13:57:38.490558 2655629 out.go:369] Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
W0623 13:57:38.490571 2655629 out.go:239] 
W0623 13:57:38.490661 2655629 out.go:239] 💣  Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

W0623 13:57:38.490678 2655629 out.go:239] 
W0623 13:57:38.491229 2655629 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0623 13:57:38.496870 2655629 out.go:177] 
W0623 13:57:38.502716 2655629 out.go:239] ❌  Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": apiserver is shutting down
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

W0623 13:57:38.502733 2655629 out.go:239] 
W0623 13:57:38.503278 2655629 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0623 13:57:38.508815 2655629 out.go:177] 

* 
* ==> CRI-O <==
* -- Logs begin at Thu 2022-06-23 17:56:26 UTC, end at Thu 2022-06-23 17:59:11 UTC. --
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.469341807Z" level=info msg="Checking image status: k8s.gcr.io/kube-controller-manager:v1.24.1" id=cda590a6-690b-4ad3-b0e7-07a90e9850e2 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.470989708Z" level=info msg="Image status: &{0xc0003dbb20 map[]}" id=cda590a6-690b-4ad3-b0e7-07a90e9850e2 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.471592866Z" level=info msg="Checking image status: k8s.gcr.io/kube-controller-manager:v1.24.1" id=c89ba207-d3c7-40b6-a65e-c053a443a785 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.472256519Z" level=info msg="Image status: &{0xc00048df10 map[]}" id=c89ba207-d3c7-40b6-a65e-c053a443a785 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.473388496Z" level=info msg="Creating container: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=9268d0c6-9a6b-49b9-a7b3-9a76e0ca8912 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.626627497Z" level=info msg="Created container 0b035029e77cb6d66b2b1665569e1859aca84c61a0e41c94ae75a87292fce6bc: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=9268d0c6-9a6b-49b9-a7b3-9a76e0ca8912 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.627123253Z" level=info msg="Starting container: 0b035029e77cb6d66b2b1665569e1859aca84c61a0e41c94ae75a87292fce6bc" id=7a8c6e0a-d123-4d90-b946-8e6648e528bf name=/runtime.v1.RuntimeService/StartContainer
Jun 23 17:58:12 minikube crio[483]: time="2022-06-23 17:58:12.649676976Z" level=info msg="Started container" PID=3665 containerID=0b035029e77cb6d66b2b1665569e1859aca84c61a0e41c94ae75a87292fce6bc description=kube-system/kube-controller-manager-minikube/kube-controller-manager id=7a8c6e0a-d123-4d90-b946-8e6648e528bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=474c77cc655b21bfdfce4c8b3ddc2d2a521a69474d08f512439a9e82b9021b55
Jun 23 17:58:17 minikube crio[483]: time="2022-06-23 17:58:17.326484667Z" level=info msg="Stopping pod sandbox: 50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87" id=5ee8924b-21e8-4551-af65-cac06cbcfa55 name=/runtime.v1.RuntimeService/StopPodSandbox
Jun 23 17:58:17 minikube crio[483]: time="2022-06-23 17:58:17.326538519Z" level=info msg="Stopped pod sandbox (already stopped): 50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87" id=5ee8924b-21e8-4551-af65-cac06cbcfa55 name=/runtime.v1.RuntimeService/StopPodSandbox
Jun 23 17:58:17 minikube crio[483]: time="2022-06-23 17:58:17.326894781Z" level=info msg="Removing pod sandbox: 50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87" id=bf02bed8-a006-4254-a672-bb6dfe63e417 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jun 23 17:58:17 minikube crio[483]: time="2022-06-23 17:58:17.333919451Z" level=info msg="Removed pod sandbox: 50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87" id=bf02bed8-a006-4254-a672-bb6dfe63e417 name=/runtime.v1.RuntimeService/RemovePodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.224661911Z" level=info msg="Running pod sandbox: kube-system/coredns-6d4b75cb6d-8mwq2/POD" id=4db97b42-f03d-4d4d-9818-4e24c13aff69 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.230535808Z" level=info msg="Running pod sandbox: kube-system/coredns-6d4b75cb6d-vsl49/POD" id=957972f7-0c74-44e8-be34-0f896dc78e71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.249993067Z" level=info msg="Got pod network &{Name:coredns-6d4b75cb6d-8mwq2 Namespace:kube-system ID:6109a45326e938a6cc7f139079435ab276672aee94397e997e3d5f1de421cd9e UID:c916852c-ef4e-4297-88a7-4ccbbae9750f NetNS:/var/run/netns/a1fec909-6887-4b38-9016-4ea1f92e4bfc Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.250043181Z" level=info msg="Adding pod kube-system_coredns-6d4b75cb6d-8mwq2 to CNI network \"crio\" (type=bridge)"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.265416320Z" level=info msg="Got pod network &{Name:coredns-6d4b75cb6d-vsl49 Namespace:kube-system ID:615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a UID:ae5d196a-e90b-4d8a-832c-5cea821613e7 NetNS:/var/run/netns/a4a7bf89-f6e8-4edc-9ec1-cf026387c60c Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.265458059Z" level=info msg="Adding pod kube-system_coredns-6d4b75cb6d-vsl49 to CNI network \"crio\" (type=bridge)"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.276747119Z" level=info msg="Got pod network &{Name:coredns-6d4b75cb6d-8mwq2 Namespace:kube-system ID:6109a45326e938a6cc7f139079435ab276672aee94397e997e3d5f1de421cd9e UID:c916852c-ef4e-4297-88a7-4ccbbae9750f NetNS:/var/run/netns/a1fec909-6887-4b38-9016-4ea1f92e4bfc Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.276849623Z" level=info msg="Checking pod kube-system_coredns-6d4b75cb6d-8mwq2 for CNI network crio (type=bridge)"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.285224972Z" level=info msg="Got pod network &{Name:coredns-6d4b75cb6d-vsl49 Namespace:kube-system ID:615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a UID:ae5d196a-e90b-4d8a-832c-5cea821613e7 NetNS:/var/run/netns/a4a7bf89-f6e8-4edc-9ec1-cf026387c60c Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.285315904Z" level=info msg="Checking pod kube-system_coredns-6d4b75cb6d-vsl49 for CNI network crio (type=bridge)"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.299890796Z" level=info msg="Ran pod sandbox 6109a45326e938a6cc7f139079435ab276672aee94397e997e3d5f1de421cd9e with infra container: kube-system/coredns-6d4b75cb6d-8mwq2/POD" id=4db97b42-f03d-4d4d-9818-4e24c13aff69 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.300881717Z" level=info msg="Ran pod sandbox 615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a with infra container: kube-system/coredns-6d4b75cb6d-vsl49/POD" id=957972f7-0c74-44e8-be34-0f896dc78e71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.301108184Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.6" id=cbe77423-3ae7-42da-8a19-45c2093e231a name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.301298584Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.6" id=d3d4c235-819d-4198-936f-270b732ba054 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.301766397Z" level=info msg="Image status: &{0xc000472bd0 map[]}" id=d3d4c235-819d-4198-936f-270b732ba054 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.301766727Z" level=info msg="Image status: &{0xc000204230 map[]}" id=cbe77423-3ae7-42da-8a19-45c2093e231a name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.302117680Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.6" id=08c79e2a-709e-442d-b488-34ce6b2c44ff name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.302117620Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.6" id=9c9f2cab-2d2f-424e-81a1-e1bdbed97263 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.302567879Z" level=info msg="Image status: &{0xc00026eaf0 map[]}" id=08c79e2a-709e-442d-b488-34ce6b2c44ff name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.302567659Z" level=info msg="Image status: &{0xc0004733b0 map[]}" id=9c9f2cab-2d2f-424e-81a1-e1bdbed97263 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.303170187Z" level=info msg="Creating container: kube-system/coredns-6d4b75cb6d-8mwq2/coredns" id=64c68d47-89ba-4c06-bddd-6f8d82c68c96 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.303170637Z" level=info msg="Creating container: kube-system/coredns-6d4b75cb6d-vsl49/coredns" id=384350fc-0311-40fd-8c61-5ddd971e22c6 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.329596500Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c5bc535338313c4491468c6bf1e6ef1ac6a0d1d43bbe1781e403fed75cfade5c/merged/etc/passwd: no such file or directory"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.329632658Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c5bc535338313c4491468c6bf1e6ef1ac6a0d1d43bbe1781e403fed75cfade5c/merged/etc/group: no such file or directory"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.340554986Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be112e744e9c901ec1d9cb2a07429c2e503ebadba9281e840e6774390f14111a/merged/etc/passwd: no such file or directory"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.340614338Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be112e744e9c901ec1d9cb2a07429c2e503ebadba9281e840e6774390f14111a/merged/etc/group: no such file or directory"
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.396210103Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-djm98/POD" id=fc002371-76ef-43c5-a1f8-2db0b39ae721 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.418670901Z" level=info msg="Ran pod sandbox b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e with infra container: kube-system/kube-proxy-djm98/POD" id=fc002371-76ef-43c5-a1f8-2db0b39ae721 name=/runtime.v1.RuntimeService/RunPodSandbox
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.419264552Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.24.1" id=05e094f9-5873-4f0d-8386-e6066beff637 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.419795895Z" level=info msg="Image status: &{0xc00030a620 map[]}" id=05e094f9-5873-4f0d-8386-e6066beff637 name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.420328661Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.24.1" id=83be6ded-a726-471b-baee-478a37ad8ffc name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.420837170Z" level=info msg="Image status: &{0xc00030ae00 map[]}" id=83be6ded-a726-471b-baee-478a37ad8ffc name=/runtime.v1.ImageService/ImageStatus
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.421492487Z" level=info msg="Creating container: kube-system/kube-proxy-djm98/kube-proxy" id=3e08f57d-59db-4bba-8944-d236f61890b6 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.445488043Z" level=info msg="Created container ca3b7158c9ad748e7c030a77528f4af31ff533d520912e1063442118910c359a: kube-system/coredns-6d4b75cb6d-8mwq2/coredns" id=64c68d47-89ba-4c06-bddd-6f8d82c68c96 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.445887477Z" level=info msg="Starting container: ca3b7158c9ad748e7c030a77528f4af31ff533d520912e1063442118910c359a" id=2ab3b96f-f155-474b-b3f4-1f34ba2d89cc name=/runtime.v1.RuntimeService/StartContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.448469791Z" level=info msg="Created container dffa8ea344721a744fb293c186037f4d8c71c0d56fdfd441e57a0c48f6301fad: kube-system/coredns-6d4b75cb6d-vsl49/coredns" id=384350fc-0311-40fd-8c61-5ddd971e22c6 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.448948765Z" level=info msg="Starting container: dffa8ea344721a744fb293c186037f4d8c71c0d56fdfd441e57a0c48f6301fad" id=7defd8e1-fc7c-4f54-95d5-612a22297f8c name=/runtime.v1.RuntimeService/StartContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.456144478Z" level=info msg="Started container" PID=3855 containerID=dffa8ea344721a744fb293c186037f4d8c71c0d56fdfd441e57a0c48f6301fad description=kube-system/coredns-6d4b75cb6d-vsl49/coredns id=7defd8e1-fc7c-4f54-95d5-612a22297f8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.466560931Z" level=info msg="Started container" PID=3848 containerID=ca3b7158c9ad748e7c030a77528f4af31ff533d520912e1063442118910c359a description=kube-system/coredns-6d4b75cb6d-8mwq2/coredns id=2ab3b96f-f155-474b-b3f4-1f34ba2d89cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=6109a45326e938a6cc7f139079435ab276672aee94397e997e3d5f1de421cd9e
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.522285589Z" level=info msg="Created container a88c9ff6722a3cdaad168dd333749c7403b2b2e49b60e5fe72b65c4d49bd72f1: kube-system/kube-proxy-djm98/kube-proxy" id=3e08f57d-59db-4bba-8944-d236f61890b6 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.522788589Z" level=info msg="Starting container: a88c9ff6722a3cdaad168dd333749c7403b2b2e49b60e5fe72b65c4d49bd72f1" id=533e5c94-1473-423e-8ac7-bbfadaeb07f0 name=/runtime.v1.RuntimeService/StartContainer
Jun 23 17:58:26 minikube crio[483]: time="2022-06-23 17:58:26.531286189Z" level=info msg="Started container" PID=3944 containerID=a88c9ff6722a3cdaad168dd333749c7403b2b2e49b60e5fe72b65c4d49bd72f1 description=kube-system/kube-proxy-djm98/kube-proxy id=533e5c94-1473-423e-8ac7-bbfadaeb07f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.467946599Z" level=info msg="Cleaning up stale resource k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_c9f511b2919261aeb7e6e5dba702e09c_1"
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.468070192Z" level=info msg="createCtr: removing container ID 332e2f5c3905a7b09edadf35764325131123960aa0d82234d96a2e6e19d518c0 from runtime" id=39e51a89-4287-4fff-b4e8-a199a34c4442 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.492647736Z" level=info msg="createCtr: removing container ID 332e2f5c3905a7b09edadf35764325131123960aa0d82234d96a2e6e19d518c0 from runtime" id=39e51a89-4287-4fff-b4e8-a199a34c4442 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.492695366Z" level=info msg="createCtr: removing container ID 332e2f5c3905a7b09edadf35764325131123960aa0d82234d96a2e6e19d518c0 from runtime" id=39e51a89-4287-4fff-b4e8-a199a34c4442 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.492714101Z" level=info msg="createCtr: removing container ID 332e2f5c3905a7b09edadf35764325131123960aa0d82234d96a2e6e19d518c0 from runtime" id=39e51a89-4287-4fff-b4e8-a199a34c4442 name=/runtime.v1.RuntimeService/CreateContainer
Jun 23 17:58:32 minikube crio[483]: time="2022-06-23 17:58:32.492764977Z" level=info msg="createCtr: removing container ID 332e2f5c3905a7b09edadf35764325131123960aa0d82234d96a2e6e19d518c0 from runtime" id=39e51a89-4287-4fff-b4e8-a199a34c4442 name=/runtime.v1.RuntimeService/CreateContainer

* 
* ==> container status <==
* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
a88c9ff6722a3       beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18   45 seconds ago       Running             kube-proxy                0                   b80e570ae9ef8
dffa8ea344721       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   45 seconds ago       Running             coredns                   0                   615a04fd7f91e
ca3b7158c9ad7       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   45 seconds ago       Running             coredns                   0                   6109a45326e93
0b035029e77cb       b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d   59 seconds ago       Running             kube-controller-manager   4                   474c77cc655b2
82ddb6c17c133       e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693   About a minute ago   Running             kube-apiserver            2                   91297bf3fd1bd
f6532c4e20bde       18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237   About a minute ago   Running             kube-scheduler            3                   1ae64ed89e418
3dae28744c4dd       b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d   About a minute ago   Exited              kube-controller-manager   3                   474c77cc655b2
6b4045f61b6fc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   About a minute ago   Running             etcd                      3                   69bbc7e59ea59
f8b5647f08186       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   2 minutes ago        Exited              etcd                      2                   556efa3b112a1
2b7a1f65a1b98       e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693   2 minutes ago        Exited              kube-apiserver            1                   160aa7dcf0c86
26491b28ec679       18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237   2 minutes ago        Exited              kube-scheduler            2                   d3f06ef5f3978

* 
* ==> coredns [ca3b7158c9ad748e7c030a77528f4af31ff533d520912e1063442118910c359a] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"

* 
* ==> coredns [dffa8ea344721a744fb293c186037f4d8c71c0d56fdfd441e57a0c48f6301fad] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"

* 
* ==> describe nodes <==
* Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 Jun 2022 17:57:13 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 Jun 2022 17:59:02 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 23 Jun 2022 17:57:58 +0000   Thu, 23 Jun 2022 17:57:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 23 Jun 2022 17:57:58 +0000   Thu, 23 Jun 2022 17:57:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 23 Jun 2022 17:57:58 +0000   Thu, 23 Jun 2022 17:57:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 23 Jun 2022 17:57:58 +0000   Thu, 23 Jun 2022 17:57:58 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                16
  ephemeral-storage:  910502376Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             65295420Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  910502376Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             65295420Ki
  pods:               110
System Info:
  Machine ID:                 d8902d1345bb469697278da23257a8d2
  System UUID:                71f0f2bc-ac4b-427a-9631-2bc78f4d859b
  Boot ID:                    c0c4ed7a-f9e9-4560-bbe7-2aa2eb4445c7
  Kernel Version:             5.17.12-300.fc36.x86_64
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.22.5
  Kubelet Version:            v1.24.1
  Kube-Proxy Version:         v1.24.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6d4b75cb6d-8mwq2            100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     47s
  kube-system                 coredns-6d4b75cb6d-vsl49            100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     47s
  kube-system                 etcd-minikube                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         81s
  kube-system                 kube-apiserver-minikube             250m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
  kube-system                 kube-controller-manager-minikube    200m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
  kube-system                 kube-proxy-djm98                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
  kube-system                 kube-scheduler-minikube             100m (0%)     0 (0%)      0 (0%)           0 (0%)         82s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (5%)   0 (0%)
  memory             240Mi (0%)  340Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  Starting                 45s                  kube-proxy       
  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  2m2s (x3 over 2m3s)  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m2s (x3 over 2m3s)  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m2s (x3 over 2m3s)  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 115s                 kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  115s                 kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     115s                 kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                74s                  kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           48s                  node-controller  Node minikube event: Registered Node minikube in Controller

* 
* ==> dmesg <==
* [Jun 8 17:15] Expanded resource Reserved due to conflict with PCI Bus 0000:00
[  +0.021896] pci 0000:00:00.2: can't derive routing for PCI INT A
[  +0.000000] pci 0000:00:00.2: PCI INT A: not connected
[  +0.341816] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[  +4.167693] ATPX version 1, functions 0x00000001
[  +0.000028] ATPX Hybrid Graphics
[  +0.027559] amdgpu 0000:03:00.0: amdgpu: PSP runtime database doesn't exist
[  +0.323089] amdgpu: SRAT table not found
[  +0.014537] amdgpu 0000:08:00.0: amdgpu: PSP runtime database doesn't exist
[  +0.658406] amdgpu 0000:08:00.0: amdgpu: SMU driver if version not matched
[  +0.160988] amdgpu: SRAT table not found
[  +0.030668] kauditd_printk_skb: 9 callbacks suppressed
[  +1.364035] systemd-sysv-generator[657]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000222] systemd-sysv-generator[657]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.272552] systemd-journald[680]: File /var/log/journal/210e73a968eb4c2f86ec60f335bd6933/system.journal corrupted or uncleanly shut down, renaming and replacing.
[  +0.146376] soc_button_array ACPI0011:00: Unknown button index 0 upage 01 usage c6, ignoring
[  +0.202254] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-cc-a0-69.ucode failed with error -2
[  +0.030225] iwlwifi 0000:04:00.0: api flags index 2 larger than supported by driver
[  +0.216967] thermal thermal_zone1: failed to read out thermal zone (-61)
[Jun 8 17:16] systemd-journald[680]: File /var/log/journal/210e73a968eb4c2f86ec60f335bd6933/user-1000.journal corrupted or uncleanly shut down, renaming and replacing.
[ +41.214066] Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers

[  +0.000005] CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
[  +0.379170] CIFS: VFS: Error connecting to socket. Aborting operation.
[  +0.000014] CIFS: VFS: cifs_mount failed w/return code = -111
[Jun 8 17:36] Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers

[  +0.000008] CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
[  +0.004250] CIFS: VFS: Error connecting to socket. Aborting operation.
[  +0.000011] CIFS: VFS: cifs_mount failed w/return code = -111
[Jun 8 17:38] Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers

[  +0.000009] CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
[Jun 9 13:34] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
[Jun 9 23:48] systemd-sysv-generator[190466]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000019] systemd-sysv-generator[190466]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jun16 02:41] systemd-sysv-generator[1319769]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000017] systemd-sysv-generator[1319769]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jun16 02:42] systemd-sysv-generator[1331775]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000025] systemd-sysv-generator[1331775]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jun17 10:19] systemd-sysv-generator[1569212]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000016] systemd-sysv-generator[1569212]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jun20 06:19] systemd-sysv-generator[2069437]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000021] systemd-sysv-generator[2069437]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jun23 05:22] systemd-sysv-generator[2536001]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000018] systemd-sysv-generator[2536001]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.

* 
* ==> etcd [6b4045f61b6fcb69913c66a40eeab4e45fe4df62612d7f91b8d3a5877aa995c9] <==
* {"level":"info","ts":"2022-06-23T17:57:38.744Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2022-06-23T17:57:38.744Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
{"level":"info","ts":"2022-06-23T17:57:38.744Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-06-23T17:57:38.744Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-23T17:57:38.745Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2022-06-23T17:57:38.746Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"0452feec7","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":true,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-06-23T17:57:38.746Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"462.983µs"}
{"level":"info","ts":"2022-06-23T17:57:38.748Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2022-06-23T17:57:38.749Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":278}
{"level":"info","ts":"2022-06-23T17:57:38.749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2022-06-23T17:57:38.749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 2"}
{"level":"info","ts":"2022-06-23T17:57:38.749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 2, commit: 278, applied: 0, lastindex: 278, lastterm: 2]"}
{"level":"warn","ts":"2022-06-23T17:57:38.751Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-06-23T17:57:38.752Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":272}
{"level":"info","ts":"2022-06-23T17:57:38.755Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-06-23T17:57:38.757Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
{"level":"info","ts":"2022-06-23T17:57:38.757Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-06-23T17:57:38.757Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-23T17:57:38.757Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-06-23T17:57:38.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2022-06-23T17:57:38.758Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-06-23T17:57:38.758Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:38.758Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:38.760Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-23T17:57:38.760Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:38.760Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:38.760Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-06-23T17:57:38.760Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
{"level":"info","ts":"2022-06-23T17:57:40.150Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-23T17:57:40.151Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-23T17:57:40.151Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-23T17:57:40.151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-23T17:57:40.151Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-23T17:57:40.152Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-23T17:57:40.152Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}

* 
* ==> etcd [f8b5647f08186c4b8d0b95b8b8a14501670e1a74dedf998922d665e60a09dd05] <==
* {"level":"info","ts":"2022-06-23T17:57:10.672Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2022-06-23T17:57:10.672Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-06-23T17:57:10.672Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-23T17:57:10.672Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2022-06-23T17:57:10.672Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"0452feec7","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-06-23T17:57:10.676Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.356666ms"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2022-06-23T17:57:10.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2022-06-23T17:57:10.691Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-06-23T17:57:10.693Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2022-06-23T17:57:10.695Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-06-23T17:57:10.698Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-23T17:57:10.698Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2022-06-23T17:57:10.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2022-06-23T17:57:10.698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-06-23T17:57:10.699Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-23T17:57:10.699Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:10.699Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:10.699Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-06-23T17:57:10.699Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-23T17:57:11.685Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-23T17:57:11.686Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:11.686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-23T17:57:11.686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-23T17:57:11.687Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-23T17:57:11.687Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-06-23T17:57:11.689Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:11.689Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:11.689Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-23T17:57:17.580Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-23T17:57:17.580Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/06/23 17:57:17 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/06/23 17:57:17 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-06-23T17:57:17.586Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-06-23T17:57:17.591Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:17.592Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-23T17:57:17.592Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}

* 
* ==> kernel <==
*  17:59:12 up 15 days, 43 min,  0 users,  load average: 0.98, 0.53, 0.26
Linux minikube 5.17.12-300.fc36.x86_64 #1 SMP PREEMPT Mon May 30 16:56:53 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"

* 
* ==> kube-apiserver [2b7a1f65a1b98c00e94bff051bf3293d6c1c499aa68ba36bf01ac0bd8bc4f323] <==
* W0623 17:57:33.181312       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
[... 19 near-identical "connection refused" reconnect warnings from 17:57:33.218 to 17:57:33.976 omitted ...]
{"level":"warn","ts":"2022-06-23T17:57:34.097Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0041eea80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
[... 12 near-identical "connection refused" reconnect warnings from 17:57:34.124 to 17:57:35.249 omitted ...]
W0623 17:57:37.592156       1 controller.go:210] RemoveEndpoints() timed out
I0623 17:57:37.592291       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0623 17:57:37.592327       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0623 17:57:37.592410       1 controller.go:115] Shutting down OpenAPI V3 controller
I0623 17:57:37.592349       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0623 17:57:37.592457       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0623 17:57:37.592335       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0623 17:57:37.592470       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0623 17:57:37.592340       1 available_controller.go:503] Shutting down AvailableConditionController
I0623 17:57:37.592349       1 apf_controller.go:326] Shutting down API Priority and Fairness config worker
I0623 17:57:37.592474       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0623 17:57:37.592358       1 controller.go:89] Shutting down OpenAPI AggregationController
I0623 17:57:37.592361       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
I0623 17:57:37.592509       1 secure_serving.go:255] Stopped listening on [::]:8443
I0623 17:57:37.592360       1 controller.go:122] Shutting down OpenAPI controller
I0623 17:57:37.592361       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
I0623 17:57:37.592314       1 autoregister_controller.go:165] Shutting down autoregister controller
I0623 17:57:37.592369       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
I0623 17:57:37.592376       1 crd_finalizer.go:278] Shutting down CRDFinalizer
I0623 17:57:37.592377       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0623 17:57:37.592391       1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0623 17:57:37.592395       1 establishing_controller.go:87] Shutting down EstablishingController
I0623 17:57:37.592400       1 naming_controller.go:302] Shutting down NamingConditionController
I0623 17:57:37.592401       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
I0623 17:57:37.592403       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0623 17:57:37.592445       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
E0623 17:57:41.632382       1 controller.go:201] Unable to remove endpoints from kubernetes service: Get "https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp [::1]:8443: connect: connection refused
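
Note: this first kube-apiserver instance (2b7a1f65a1b9...) lost its etcd backend at 17:57:17 (the shutdown recorded in the etcd log above), failed to reconnect, timed out removing its endpoints, and exited; the second instance in the next section starts serving at 17:57:50. Once the replacement apiserver is reachable, its health can be probed end to end (a sketch, assuming kubectl points at the minikube context):

    $ kubectl get --raw='/readyz?verbose'   # per-check readiness, including the etcd check
    $ kubectl get --raw='/livez'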

* 
* ==> kube-apiserver [82ddb6c17c1338264fbced1ff113860c75c0104ccda5201e13ddcd1a9aa48b12] <==
* W0623 17:57:49.843439       1 genericapiserver.go:557] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0623 17:57:49.848060       1 genericapiserver.go:557] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0623 17:57:49.850624       1 genericapiserver.go:557] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0623 17:57:49.855718       1 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0623 17:57:49.855741       1 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0623 17:57:49.857045       1 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0623 17:57:49.857060       1 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0623 17:57:49.860628       1 genericapiserver.go:557] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0623 17:57:49.864059       1 genericapiserver.go:557] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0623 17:57:49.867358       1 genericapiserver.go:557] Skipping API apps/v1beta2 because it has no resources.
W0623 17:57:49.867378       1 genericapiserver.go:557] Skipping API apps/v1beta1 because it has no resources.
W0623 17:57:49.869099       1 genericapiserver.go:557] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0623 17:57:49.872327       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0623 17:57:49.872347       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0623 17:57:49.909096       1 genericapiserver.go:557] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0623 17:57:50.873791       1 secure_serving.go:210] Serving securely on [::]:8443
I0623 17:57:50.874213       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0623 17:57:50.874470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0623 17:57:50.874604       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0623 17:57:50.874625       1 controller.go:83] Starting OpenAPI AggregationController
I0623 17:57:50.874666       1 controller.go:80] Starting OpenAPI V3 AggregationController
I0623 17:57:50.874651       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0623 17:57:50.874710       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0623 17:57:50.874715       1 available_controller.go:491] Starting AvailableConditionController
I0623 17:57:50.874694       1 autoregister_controller.go:141] Starting autoregister controller
I0623 17:57:50.874723       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0623 17:57:50.874765       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0623 17:57:50.874765       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0623 17:57:50.874771       1 controller.go:85] Starting OpenAPI V3 controller
I0623 17:57:50.874739       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0623 17:57:50.874815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0623 17:57:50.874822       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0623 17:57:50.874834       1 naming_controller.go:291] Starting NamingConditionController
I0623 17:57:50.874878       1 crd_finalizer.go:266] Starting CRDFinalizer
I0623 17:57:50.874923       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0623 17:57:50.874480       1 apf_controller.go:317] Starting API Priority and Fairness config controller
I0623 17:57:50.874930       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0623 17:57:50.874861       1 establishing_controller.go:76] Starting EstablishingController
I0623 17:57:50.874734       1 controller.go:85] Starting OpenAPI controller
I0623 17:57:50.875268       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0623 17:57:50.875344       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0623 17:57:50.875364       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0623 17:57:50.875384       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0623 17:57:50.875723       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0623 17:57:50.901979       1 shared_informer.go:262] Caches are synced for node_authorizer
I0623 17:57:50.975310       1 cache.go:39] Caches are synced for autoregister controller
I0623 17:57:50.975452       1 apf_controller.go:322] Running API Priority and Fairness config worker
I0623 17:57:50.975562       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0623 17:57:50.975604       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0623 17:57:50.975663       1 shared_informer.go:262] Caches are synced for crd-autoregister
I0623 17:57:50.975791       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0623 17:57:51.075382       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0623 17:57:51.690473       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0623 17:57:51.879846       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0623 17:58:13.226140       1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0623 17:58:24.884675       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0623 17:58:25.288215       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0623 17:58:25.432528       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0623 17:58:25.447527       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0623 17:58:25.685014       1 controller.go:611] quota admission added evaluator for: endpoints

* 
* ==> kube-controller-manager [0b035029e77cb6d66b2b1665569e1859aca84c61a0e41c94ae75a87292fce6bc] <==
* I0623 17:58:24.731780       1 pv_controller_base.go:311] Starting persistent volume controller
I0623 17:58:24.731796       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0623 17:58:24.780714       1 controllermanager.go:593] Started "clusterrole-aggregation"
I0623 17:58:24.780747       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0623 17:58:24.780760       1 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
I0623 17:58:24.787419       1 shared_informer.go:255] Waiting for caches to sync for resource quota
W0623 17:58:24.798175       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0623 17:58:24.804231       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0623 17:58:24.805769       1 shared_informer.go:262] Caches are synced for node
I0623 17:58:24.805791       1 range_allocator.go:173] Starting range CIDR allocator
I0623 17:58:24.805798       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0623 17:58:24.805809       1 shared_informer.go:262] Caches are synced for cidrallocator
I0623 17:58:24.810644       1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0623 17:58:24.812574       1 shared_informer.go:262] Caches are synced for endpoint_slice
I0623 17:58:24.816849       1 shared_informer.go:262] Caches are synced for GC
I0623 17:58:24.825256       1 shared_informer.go:262] Caches are synced for job
I0623 17:58:24.827454       1 shared_informer.go:262] Caches are synced for taint
I0623 17:58:24.827528       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
W0623 17:58:24.827626       1 node_lifecycle_controller.go:1014] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0623 17:58:24.827663       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0623 17:58:24.827707       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
I0623 17:58:24.827816       1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0623 17:58:24.830630       1 shared_informer.go:262] Caches are synced for HPA
I0623 17:58:24.830678       1 shared_informer.go:262] Caches are synced for TTL after finished
I0623 17:58:24.830703       1 shared_informer.go:262] Caches are synced for PVC protection
I0623 17:58:24.831494       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0623 17:58:24.831960       1 shared_informer.go:262] Caches are synced for daemon sets
I0623 17:58:24.834535       1 shared_informer.go:262] Caches are synced for deployment
I0623 17:58:24.834607       1 shared_informer.go:262] Caches are synced for expand
I0623 17:58:24.838481       1 shared_informer.go:262] Caches are synced for PV protection
I0623 17:58:24.839293       1 shared_informer.go:262] Caches are synced for disruption
I0623 17:58:24.839317       1 disruption.go:371] Sending events to api server.
I0623 17:58:24.845896       1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0623 17:58:24.849135       1 shared_informer.go:262] Caches are synced for attach detach
I0623 17:58:24.849184       1 shared_informer.go:262] Caches are synced for crt configmap
I0623 17:58:24.856760       1 shared_informer.go:262] Caches are synced for ReplicationController
I0623 17:58:24.859053       1 shared_informer.go:262] Caches are synced for stateful set
I0623 17:58:24.862312       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0623 17:58:24.862386       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0623 17:58:24.862486       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0623 17:58:24.863673       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0623 17:58:24.870962       1 shared_informer.go:262] Caches are synced for cronjob
I0623 17:58:24.872139       1 shared_informer.go:262] Caches are synced for TTL
I0623 17:58:24.873264       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0623 17:58:24.881156       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0623 17:58:24.881242       1 shared_informer.go:262] Caches are synced for ephemeral
I0623 17:58:24.881363       1 shared_informer.go:262] Caches are synced for ReplicaSet
I0623 17:58:24.884313       1 shared_informer.go:262] Caches are synced for endpoint
I0623 17:58:24.951773       1 shared_informer.go:262] Caches are synced for service account
I0623 17:58:24.994100       1 shared_informer.go:262] Caches are synced for namespace
I0623 17:58:25.032752       1 shared_informer.go:262] Caches are synced for persistent volume
I0623 17:58:25.038281       1 shared_informer.go:262] Caches are synced for resource quota
I0623 17:58:25.088107       1 shared_informer.go:262] Caches are synced for resource quota
I0623 17:58:25.290871       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
I0623 17:58:25.441344       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-djm98"
I0623 17:58:25.505271       1 shared_informer.go:262] Caches are synced for garbage collector
I0623 17:58:25.522567       1 shared_informer.go:262] Caches are synced for garbage collector
I0623 17:58:25.522603       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0623 17:58:25.889907       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8mwq2"
I0623 17:58:25.893689       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vsl49"
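
Note: this replacement kube-controller-manager (0b035029e77c...) syncs all of its informer caches at 17:58:24-25 and immediately scales up coredns and creates the kube-proxy pod, i.e. the control plane did recover on this attempt. The resulting workloads can be checked with (a sketch):

    $ kubectl -n kube-system get pods -o wide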

* 
* ==> kube-controller-manager [3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa] <==
* crypto/tls.(*listener).Accept(0xc001300048)
	/usr/local/go/src/crypto/tls/tls.go:66 +0x2d
net/http.(*Server).Serve(0xc0001908c0, {0x4d0d638, 0xc001300048})
	/usr/local/go/src/net/http/server.go:3039 +0x385
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2()
	vendor/k8s.io/apiserver/pkg/server/secure_serving.go:250 +0x177
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
	vendor/k8s.io/apiserver/pkg/server/secure_serving.go:240 +0x18a

goroutine 226 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4cfc9e0, 0xc000a1e0c0}, 0x1, 0xc0000ba300)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x135
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4d2e0f0?, 0xdf8475800, 0x0, 0x78?, 0x47f5140?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00095b080?, 0x0?, 0xc0000ba300?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24a

goroutine 137 [IO wait]:
internal/poll.runtime_pollWait(0x7f0b3d35e878, 0x72)
	/usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc00052c900?, 0xc000b50000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc00052c900, {0xc000b50000, 0x931, 0x931})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00052c900, {0xc000b50000?, 0xc000776300?, 0xc000b50049?})
	/usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc000154220, {0xc000b50000?, 0x0?, 0x3d656000?})
	/usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00035ee58, {0xc000b50000?, 0x0?, 0x7270660?})
	/usr/local/go/src/crypto/tls/conn.go:784 +0x3d
bytes.(*Buffer).ReadFrom(0xc000af0cf8, {0x4cf55e0, 0xc00035ee58})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000af0a80, {0x4cfdb00?, 0xc000154220}, 0x8ed?)
	/usr/local/go/src/crypto/tls/conn.go:806 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc000af0a80, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:613 +0x116
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:581
crypto/tls.(*Conn).Read(0xc000af0a80, {0xc000b6f000, 0x1000, 0x9195e0?})
	/usr/local/go/src/crypto/tls/conn.go:1284 +0x16f
bufio.(*Reader).Read(0xc00033c420, {0xc000b58200, 0x9, 0x935fa2?})
	/usr/local/go/src/bufio/bufio.go:236 +0x1b4
io.ReadAtLeast({0x4cf5400, 0xc00033c420}, {0xc000b58200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:331 +0x9a
io.ReadFull(...)
	/usr/local/go/src/io/io.go:350
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc000b58200?, 0x9?, 0xc0015ef3b0?}, {0x4cf5400?, 0xc00033c420?})
	vendor/golang.org/x/net/http2/frame.go:237 +0x6e
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000b581c0)
	vendor/golang.org/x/net/http2/frame.go:498 +0x95
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000b45f98)
	vendor/golang.org/x/net/http2/transport.go:2101 +0x130
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000b6c000)
	vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	vendor/golang.org/x/net/http2/transport.go:725 +0xa65
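
Note: the log of the earlier kube-controller-manager container (3dae28744c4d...) ends in a goroutine dump rather than a normal shutdown message, which lines up with the CrashLoopBackOff the kubelet records for it at 17:57:51 below. To recover the start of that log and whatever preceded the dump, the previous container can be queried directly (a sketch, assuming access to the node):

    $ kubectl -n kube-system logs kube-controller-manager-minikube --previous
    $ minikube ssh "sudo crictl logs 3dae28744c4d"   # crictl accepts ID prefixes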

* 
* ==> kube-proxy [a88c9ff6722a3cdaad168dd333749c7403b2b2e49b60e5fe72b65c4d49bd72f1] <==
* I0623 17:58:26.551570       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0623 17:58:26.560786       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0623 17:58:26.569567       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0623 17:58:26.578892       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0623 17:58:26.587224       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0623 17:58:26.587266       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0623 17:58:26.587299       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0623 17:58:26.597779       1 server_others.go:206] "Using iptables Proxier"
I0623 17:58:26.597802       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0623 17:58:26.597807       1 server_others.go:214] "Creating dualStackProxier for iptables"
I0623 17:58:26.597816       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0623 17:58:26.597830       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0623 17:58:26.597902       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0623 17:58:26.598009       1 server.go:661] "Version info" version="v1.24.1"
I0623 17:58:26.598016       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0623 17:58:26.800459       1 config.go:226] "Starting endpoint slice config controller"
I0623 17:58:26.800509       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0623 17:58:26.800796       1 config.go:317] "Starting service config controller"
I0623 17:58:26.800838       1 shared_informer.go:255] Waiting for caches to sync for service config
I0623 17:58:26.801046       1 config.go:444] "Starting node config controller"
I0623 17:58:26.801061       1 shared_informer.go:255] Waiting for caches to sync for node config
I0623 17:58:26.901253       1 shared_informer.go:262] Caches are synced for service config
I0623 17:58:26.901291       1 shared_informer.go:262] Caches are synced for node config
I0623 17:58:26.901309       1 shared_informer.go:262] Caches are synced for endpoint slice config
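
Note: the four modprobe failures at the top of this section are benign, as the message itself says: the node container has no /lib/modules mount, so the ip_vs modules cannot be loaded, and kube-proxy uses the iptables proxier instead. That the iptables rules were actually programmed can be verified with (a sketch, assuming iptables mode as logged above):

    $ minikube ssh "sudo iptables -t nat -L KUBE-SERVICES | head"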

* 
* ==> kube-scheduler [26491b28ec679fb49752b6220fcbae6111e1e60f1c55865fa9494eda69246ac8] <==
* I0623 17:57:13.690121       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.1"
I0623 17:57:13.690222       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0623 17:57:13.691276       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0623 17:57:13.691298       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0623 17:57:13.691323       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0623 17:57:13.691367       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0623 17:57:13.692544       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0623 17:57:13.692585       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:13.692593       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0623 17:57:13.692615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:13.692606       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:13.692653       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:13.692636       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0623 17:57:13.692698       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0623 17:57:13.692715       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0623 17:57:13.692714       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0623 17:57:13.692653       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0623 17:57:13.692732       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0623 17:57:13.692660       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0623 17:57:13.692747       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0623 17:57:13.692683       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:13.692802       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0623 17:57:13.692887       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0623 17:57:13.692816       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:13.693087       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0623 17:57:13.693107       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0623 17:57:13.693122       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0623 17:57:13.693150       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0623 17:57:13.693439       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:13.693471       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:13.693515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0623 17:57:13.693524       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0623 17:57:13.693518       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0623 17:57:13.693541       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0623 17:57:13.693583       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0623 17:57:13.693612       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0623 17:57:14.634052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0623 17:57:14.634084       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0623 17:57:14.695329       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0623 17:57:14.695380       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0623 17:57:14.708226       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0623 17:57:14.708251       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0623 17:57:14.776998       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0623 17:57:14.777041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0623 17:57:14.826572       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0623 17:57:14.826604       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0623 17:57:14.899553       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0623 17:57:14.899590       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0623 17:57:14.926100       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:14.926141       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:14.943529       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:14.943557       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:14.954457       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0623 17:57:14.954476       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0623 17:57:15.026546       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0623 17:57:15.026587       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0623 17:57:17.591429       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0623 17:57:17.591499       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0623 17:57:17.591591       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0623 17:57:17.591593       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
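
Note: the burst of "forbidden" warnings at 17:57:13-15 looks like the usual startup race: the scheduler's informers begin listing resources before the apiserver has finished reconciling the bootstrap RBAC roles, so the lists are rejected until the bindings exist. This instance is then terminated at 17:57:17 along with the rest of the control plane. Whether the scheduler's permissions have settled can be checked with (a sketch):

    $ kubectl auth can-i list pods --as=system:kube-scheduler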

* 
* ==> kube-scheduler [f6532c4e20bde66a5c236208b38499c34e77b5a9f5ae0c10c25ec4362e4adac7] <==
* I0623 17:57:48.967368       1 serving.go:348] Generated self-signed cert in-memory
W0623 17:57:50.893997       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0623 17:57:50.894104       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0623 17:57:50.894135       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0623 17:57:50.894157       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0623 17:57:50.903065       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.1"
I0623 17:57:50.903096       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0623 17:57:50.904715       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0623 17:57:50.904735       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0623 17:57:50.904904       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0623 17:57:50.904965       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0623 17:57:51.005787       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

* 
* ==> kubelet <==
* -- Logs begin at Thu 2022-06-23 17:56:26 UTC, end at Thu 2022-06-23 17:59:12 UTC. --
Jun 23 17:57:48 minikube kubelet[2969]: I0623 17:57:48.533987    2969 status_manager.go:664] "Failed to get status for pod" podUID=893d6fb85ed24c7b3f83493318ed21f6 pod="kube-system/kube-scheduler-minikube" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube\": dial tcp 192.168.49.2:8443: connect: connection refused"
Jun 23 17:57:48 minikube kubelet[2969]: E0623 17:57:48.534021    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8443: connect: connection refused" pod="kube-system/kube-scheduler-minikube"
Jun 23 17:57:48 minikube kubelet[2969]: I0623 17:57:48.534677    2969 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="160aa7dcf0c86d7adc03ae7716d69cdff484b524413f2170bcd82aba7b8f1587"
Jun 23 17:57:48 minikube kubelet[2969]: I0623 17:57:48.535286    2969 status_manager.go:664] "Failed to get status for pod" podUID=dcbdcba9ae41d79706f051a802804f13 pod="kube-system/kube-apiserver-minikube" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube\": dial tcp 192.168.49.2:8443: connect: connection refused"
Jun 23 17:57:48 minikube kubelet[2969]: E0623 17:57:48.535343    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8443: connect: connection refused" pod="kube-system/kube-apiserver-minikube"
Jun 23 17:57:48 minikube kubelet[2969]: W0623 17:57:48.575820    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893d6fb85ed24c7b3f83493318ed21f6.slice/crio-1ae64ed89e418215fbd10cf1d04fbee461525244fb60d424a3d1b383a7e48454.scope WatchSource:0}: Error finding container 1ae64ed89e418215fbd10cf1d04fbee461525244fb60d424a3d1b383a7e48454: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc00077e150 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:57:48 minikube kubelet[2969]: W0623 17:57:48.583298    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbdcba9ae41d79706f051a802804f13.slice/crio-91297bf3fd1bd3ffc9299bd0414e910c4e6f736db7082088c48be6e670e78751.scope WatchSource:0}: Error finding container 91297bf3fd1bd3ffc9299bd0414e910c4e6f736db7082088c48be6e670e78751: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a030 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:57:48 minikube kubelet[2969]: W0623 17:57:48.583923    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbdcba9ae41d79706f051a802804f13.slice/crio-91297bf3fd1bd3ffc9299bd0414e910c4e6f736db7082088c48be6e670e78751.scope WatchSource:0}: Error finding container 91297bf3fd1bd3ffc9299bd0414e910c4e6f736db7082088c48be6e670e78751: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a060 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:57:51 minikube kubelet[2969]: I0623 17:57:51.545992    2969 scope.go:110] "RemoveContainer" containerID="d882df601c552a4d9c20c440c1e1a9a32682ab19a136111878e1d312155e978b"
Jun 23 17:57:51 minikube kubelet[2969]: I0623 17:57:51.554039    2969 scope.go:110] "RemoveContainer" containerID="3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa"
Jun 23 17:57:51 minikube kubelet[2969]: E0623 17:57:51.554808    2969 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(c9f511b2919261aeb7e6e5dba702e09c)\"" pod="kube-system/kube-controller-manager-minikube" podUID=c9f511b2919261aeb7e6e5dba702e09c
Jun 23 17:57:51 minikube kubelet[2969]: E0623 17:57:51.560164    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jun 23 17:57:51 minikube kubelet[2969]: E0623 17:57:51.563106    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jun 23 17:57:52 minikube kubelet[2969]: E0623 17:57:52.558520    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Jun 23 17:57:52 minikube kubelet[2969]: I0623 17:57:52.558628    2969 scope.go:110] "RemoveContainer" containerID="3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa"
Jun 23 17:57:52 minikube kubelet[2969]: E0623 17:57:52.559460    2969 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(c9f511b2919261aeb7e6e5dba702e09c)\"" pod="kube-system/kube-controller-manager-minikube" podUID=c9f511b2919261aeb7e6e5dba702e09c
Jun 23 17:57:52 minikube kubelet[2969]: E0623 17:57:52.561265    2969 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jun 23 17:57:58 minikube kubelet[2969]: I0623 17:57:58.043542    2969 scope.go:110] "RemoveContainer" containerID="3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa"
Jun 23 17:57:58 minikube kubelet[2969]: E0623 17:57:58.044351    2969 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(c9f511b2919261aeb7e6e5dba702e09c)\"" pod="kube-system/kube-controller-manager-minikube" podUID=c9f511b2919261aeb7e6e5dba702e09c
Jun 23 17:57:58 minikube kubelet[2969]: I0623 17:57:58.201937    2969 reconciler.go:157] "Reconciler: start to sync state"
Jun 23 17:57:59 minikube kubelet[2969]: I0623 17:57:59.257539    2969 scope.go:110] "RemoveContainer" containerID="3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa"
Jun 23 17:57:59 minikube kubelet[2969]: E0623 17:57:59.258351    2969 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(c9f511b2919261aeb7e6e5dba702e09c)\"" pod="kube-system/kube-controller-manager-minikube" podUID=c9f511b2919261aeb7e6e5dba702e09c
Jun 23 17:58:12 minikube kubelet[2969]: I0623 17:58:12.468802    2969 scope.go:110] "RemoveContainer" containerID="3dae28744c4ddad386ea0f3c408359217995c124d993f7b9f76fed68c00fb7fa"
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.344679    2969 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff: no such file or directory, extraDiskErr: <nil>
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.347937    2969 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff: no such file or directory, extraDiskErr: <nil>
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.402342    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893d6fb85ed24c7b3f83493318ed21f6.slice/crio-b67da6057419d1b53fecf5e1e322dfb78a00dd7af52b90eb89f697ab9548d2b9.scope: Error finding container b67da6057419d1b53fecf5e1e322dfb78a00dd7af52b90eb89f697ab9548d2b9: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a1c8 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.402945    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-6e7a5f9f2d9db20b47e1202cc2618bed98bae52664bba1137a4eab03ac47670a.scope: Error finding container 6e7a5f9f2d9db20b47e1202cc2618bed98bae52664bba1137a4eab03ac47670a: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc001254600 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.403626    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-1325c3b7ac3434b76a1224243c67005455dbd992cb6ac5fde5d749002c1d6d90.scope: Error finding container 1325c3b7ac3434b76a1224243c67005455dbd992cb6ac5fde5d749002c1d6d90: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a240 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.404083    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906edd533192a4db2396a938662a5271.slice/crio-4d57ec0b15602d225eaabbb18f549c1c5456c9f748e655749c45e13f23fe68ce.scope: Error finding container 4d57ec0b15602d225eaabbb18f549c1c5456c9f748e655749c45e13f23fe68ce: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc001254690 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.405975    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906edd533192a4db2396a938662a5271.slice/crio-3687a589e5b816813ca876a9535c56d6686e57809b5e9fd06ffe9ed3df2f7260.scope: Error finding container 3687a589e5b816813ca876a9535c56d6686e57809b5e9fd06ffe9ed3df2f7260: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0004423c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.406412    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbdcba9ae41d79706f051a802804f13.slice/crio-2ebe0e5b1ab3c676336bab7893675f2b02f5c9bd1a414a3c79af26026dad7365.scope: Error finding container 2ebe0e5b1ab3c676336bab7893675f2b02f5c9bd1a414a3c79af26026dad7365: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000f50db0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.406818    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906edd533192a4db2396a938662a5271.slice/crio-3687a589e5b816813ca876a9535c56d6686e57809b5e9fd06ffe9ed3df2f7260.scope: Error finding container 3687a589e5b816813ca876a9535c56d6686e57809b5e9fd06ffe9ed3df2f7260: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000442420 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.407114    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893d6fb85ed24c7b3f83493318ed21f6.slice/crio-08744bda7f8ac2449a7068f7c36dcde6842cbd6ec9e69b76fb6da7671f62acda.scope: Error finding container 08744bda7f8ac2449a7068f7c36dcde6842cbd6ec9e69b76fb6da7671f62acda: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a3c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.407508    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-6e7a5f9f2d9db20b47e1202cc2618bed98bae52664bba1137a4eab03ac47670a.scope: Error finding container 6e7a5f9f2d9db20b47e1202cc2618bed98bae52664bba1137a4eab03ac47670a: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a3f0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.408074    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-1325c3b7ac3434b76a1224243c67005455dbd992cb6ac5fde5d749002c1d6d90.scope: Error finding container 1325c3b7ac3434b76a1224243c67005455dbd992cb6ac5fde5d749002c1d6d90: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000442540 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.408512    2969 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906edd533192a4db2396a938662a5271.slice/crio-4d57ec0b15602d225eaabbb18f549c1c5456c9f748e655749c45e13f23fe68ce.scope: Error finding container 4d57ec0b15602d225eaabbb18f549c1c5456c9f748e655749c45e13f23fe68ce: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a450 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.410842    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893d6fb85ed24c7b3f83493318ed21f6.slice/crio-08744bda7f8ac2449a7068f7c36dcde6842cbd6ec9e69b76fb6da7671f62acda.scope: Error finding container 08744bda7f8ac2449a7068f7c36dcde6842cbd6ec9e69b76fb6da7671f62acda: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000442888 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.411274    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbdcba9ae41d79706f051a802804f13.slice/crio-2ebe0e5b1ab3c676336bab7893675f2b02f5c9bd1a414a3c79af26026dad7365.scope: Error finding container 2ebe0e5b1ab3c676336bab7893675f2b02f5c9bd1a414a3c79af26026dad7365: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c7a5b8 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:17 minikube kubelet[2969]: E0623 17:58:17.411628    2969 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893d6fb85ed24c7b3f83493318ed21f6.slice/crio-b67da6057419d1b53fecf5e1e322dfb78a00dd7af52b90eb89f697ab9548d2b9.scope: Error finding container b67da6057419d1b53fecf5e1e322dfb78a00dd7af52b90eb89f697ab9548d2b9: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc001053b48 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:24 minikube kubelet[2969]: I0623 17:58:24.829712    2969 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jun 23 17:58:24 minikube kubelet[2969]: I0623 17:58:24.830505    2969 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.445646    2969 topology_manager.go:200] "Topology Admit Handler"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.578186    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-xtables-lock\") pod \"kube-proxy-djm98\" (UID: \"8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9\") " pod="kube-system/kube-proxy-djm98"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.578240    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-kube-proxy\") pod \"kube-proxy-djm98\" (UID: \"8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9\") " pod="kube-system/kube-proxy-djm98"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.578315    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-lib-modules\") pod \"kube-proxy-djm98\" (UID: \"8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9\") " pod="kube-system/kube-proxy-djm98"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.578350    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vwk\" (UniqueName: \"kubernetes.io/projected/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-kube-api-access-r9vwk\") pod \"kube-proxy-djm98\" (UID: \"8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9\") " pod="kube-system/kube-proxy-djm98"
Jun 23 17:58:25 minikube kubelet[2969]: E0623 17:58:25.687797    2969 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 23 17:58:25 minikube kubelet[2969]: E0623 17:58:25.687841    2969 projected.go:192] Error preparing data for projected volume kube-api-access-r9vwk for pod kube-system/kube-proxy-djm98: configmap "kube-root-ca.crt" not found
Jun 23 17:58:25 minikube kubelet[2969]: E0623 17:58:25.687935    2969 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-kube-api-access-r9vwk podName:8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9 nodeName:}" failed. No retries permitted until 2022-06-23 17:58:26.187903028 +0000 UTC m=+69.158775596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r9vwk" (UniqueName: "kubernetes.io/projected/8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9-kube-api-access-r9vwk") pod "kube-proxy-djm98" (UID: "8ce6caec-d15f-4cd5-a0f2-d2accfb9aca9") : configmap "kube-root-ca.crt" not found
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.894308    2969 topology_manager.go:200] "Topology Admit Handler"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.899975    2969 topology_manager.go:200] "Topology Admit Handler"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.980800    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4hj\" (UniqueName: \"kubernetes.io/projected/c916852c-ef4e-4297-88a7-4ccbbae9750f-kube-api-access-mp4hj\") pod \"coredns-6d4b75cb6d-8mwq2\" (UID: \"c916852c-ef4e-4297-88a7-4ccbbae9750f\") " pod="kube-system/coredns-6d4b75cb6d-8mwq2"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.980898    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae5d196a-e90b-4d8a-832c-5cea821613e7-config-volume\") pod \"coredns-6d4b75cb6d-vsl49\" (UID: \"ae5d196a-e90b-4d8a-832c-5cea821613e7\") " pod="kube-system/coredns-6d4b75cb6d-vsl49"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.981037    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmbk\" (UniqueName: \"kubernetes.io/projected/ae5d196a-e90b-4d8a-832c-5cea821613e7-kube-api-access-spmbk\") pod \"coredns-6d4b75cb6d-vsl49\" (UID: \"ae5d196a-e90b-4d8a-832c-5cea821613e7\") " pod="kube-system/coredns-6d4b75cb6d-vsl49"
Jun 23 17:58:25 minikube kubelet[2969]: I0623 17:58:25.981071    2969 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c916852c-ef4e-4297-88a7-4ccbbae9750f-config-volume\") pod \"coredns-6d4b75cb6d-8mwq2\" (UID: \"c916852c-ef4e-4297-88a7-4ccbbae9750f\") " pod="kube-system/coredns-6d4b75cb6d-8mwq2"
Jun 23 17:58:26 minikube kubelet[2969]: W0623 17:58:26.301027    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5d196a_e90b_4d8a_832c_5cea821613e7.slice/crio-615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a.scope WatchSource:0}: Error finding container 615a04fd7f91e4ee43bf003860908c6e57ca6b80570a1c64135bde6b5fb9d77a: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc001254000 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:26 minikube kubelet[2969]: W0623 17:58:26.417750    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce6caec_d15f_4cd5_a0f2_d2accfb9aca9.slice/crio-b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e.scope WatchSource:0}: Error finding container b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000f802a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:26 minikube kubelet[2969]: W0623 17:58:26.418324    2969 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce6caec_d15f_4cd5_a0f2_d2accfb9aca9.slice/crio-b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e.scope WatchSource:0}: Error finding container b80e570ae9ef84426f558c61d9618cbac1da7616ae0594ac53ee78490248549e: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0007c9110 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x844e40) %!!(MISSING)s(func() error=0x844f40)}
Jun 23 17:58:27 minikube kubelet[2969]: W0623 17:58:27.210123    2969 container.go:589] Failed to update stats for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87.scope": unable to determine device info for dir: /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff: stat failed on /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff with error: no such file or directory, continuing to push stats
Jun 23 17:58:42 minikube kubelet[2969]: W0623 17:58:42.019727    2969 container.go:589] Failed to update stats for container "/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f511b2919261aeb7e6e5dba702e09c.slice/crio-50dd32760921e9917f7f4341e1be2c86c770bff895151d8210938735b4d59b87.scope": unable to determine device info for dir: /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff: stat failed on /var/lib/containers/storage/overlay/a9f8bdd5982e8af3b06018124453dfc6e12f38d0bed19ac0279860b9d51f6bec/diff with error: no such file or directory, continuing to push stats

@AkihiroSuda
Member

AkihiroSuda commented Jun 25, 2022

Please try loading these kernel modules, especially br_netfilter.
https://github.com/rootless-containers/usernetes/blob/master/config/modules-load.d/usernetes.conf
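For example, a minimal way to load that module on a systemd-based distro (the full module list is in the linked usernetes config; only br_netfilter is shown here):

$ sudo modprobe br_netfilter
$ echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf   # persist across reboots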

Also please try setting --container-runtime=containerd

@jesperpedersen
Author

I'm using rootless podman with cri-o 1.22. kind 0.14+ (including main) works with this setup.

I reverted to ad5c964, which works.
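(Reverting here means building minikube at that commit; roughly, assuming the standard minikube build flow:)

$ git clone https://github.com/kubernetes/minikube && cd minikube
$ git checkout ad5c964
$ make
$ ./out/minikube start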

@alias-dev
Contributor

Adding to this as I've run into the same issue today. I'm seeing the same error messages on start:

$ minikube start --driver=podman --container-runtime=cri-o
...
❌  Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output:
** stderr **
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

Running minikube update-context gives me a working kube context, but running kubectl describe node/minikube shows an issue with CNI:

NotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?

Running the command shown in the error message via podman exec, the manifests are applied successfully:

$ podman exec minikube /bin/bash -c 'sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml'
clusterrole.rbac.authorization.k8s.io/kindnet created
clusterrolebinding.rbac.authorization.k8s.io/kindnet created
serviceaccount/kindnet created
daemonset.apps/kindnet created

And the node becomes ready:

$ kubectl get node/minikube
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   11m   v1.24.1

To me it looks as though the API server takes a little longer to come up than expected, so the attempts to apply the CNI manifests fail before it becomes available.
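If that is the cause, waiting for the apiserver before applying the manifests should work around it; a rough sketch (the polling loop is illustrative, not minikube's actual retry logic):

$ podman exec minikube /bin/bash -c 'until sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz >/dev/null 2>&1; do sleep 2; done'
$ podman exec minikube /bin/bash -c 'sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml'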

@AkihiroSuda
Member

AkihiroSuda commented Jun 30, 2022

Seems to be a regression in fffffaa

--kubernetes-version=v1.23.6 seems to work as a workaround.
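For example:

$ minikube start --driver=podman --container-runtime=cri-o --kubernetes-version=v1.23.6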

Oddly, cri-o seems to work with the rootless docker driver and the rootful podman driver, but fails only with the rootless podman driver.

@AkihiroSuda
Member

As a workaround I updated https://minikube.sigs.k8s.io/docs/drivers/podman/ to recommend containerd for Rootless Podman.

minikube config set rootless true
minikube start --driver=podman --container-runtime=containerd

This recommendation can be reverted later after getting the issue properly resolved.

@fdfytr

fdfytr commented Jul 25, 2022

As a workaround I updated https://minikube.sigs.k8s.io/docs/drivers/podman/ to recommend containerd for Rootless Podman.

minikube config set rootless true
minikube start --driver=podman --container-runtime=containerd

This recommendation can be reverted later after getting the issue properly resolved.

That did not help in my case (Arch Linux with minikube from the community repo); I got a similar error when trying to run rootless with the podman driver on containerd.

I noticed that "Your kernel does not support swap limit capabilities or the cgroup is not mounted." and "Your kernel does not support CPU cfs period/quota or the cgroup is not mounted." show up everywhere.

Could that be the culprit?

@fdfytr

fdfytr commented Jul 26, 2022

Could that be the culprit?

Looks like I am talking to myself :) but I followed this solution from podman and was able to run minikube with the podman driver on containerd.

It is not an issue with minikube; it is rootless podman.
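For anyone else landing here: the usual fix of that kind for rootless podman is delegating cgroup v2 controllers to the user session (assumes systemd; this may or may not be the exact solution linked above):

$ sudo mkdir -p /etc/systemd/system/user@.service.d
$ printf '[Service]\nDelegate=cpu cpuset io memory pids\n' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
$ sudo systemctl daemon-reload

Log out and back in (or reboot) so the user session picks up the delegated controllers.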

@spowelljr spowelljr removed the triage/needs-information Indicates an issue needs more information in order to work on it. label Aug 10, 2022
@klaases
Contributor

klaases commented Oct 10, 2022

Looks like I am talking to myself :) but I followed this solution from podman and was able to run minikube with the podman driver on containerd.

It is not an issue with minikube; it is rootless podman.

Hi @jesperpedersen – does this work for you as well?

If not, please feel free to re-open the issue by commenting with /reopen. Otherwise this issue will be closed, as additional information was unavailable and some time has passed.

Additional information that may be helpful:

  • Whether the issue occurs with the latest minikube release

  • The exact minikube start command line used

  • The full output of minikube logs (run minikube logs --file=logs.txt to create a log file)

Thank you for sharing your experience!

@klaases klaases closed this as completed Oct 10, 2022
@jesperpedersen
Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Oct 11, 2022
@k8s-ci-robot
Contributor

@jesperpedersen: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jesperpedersen
Author

😄  minikube v1.27.1 on Fedora 36
    ▪ MINIKUBE_ROOTLESS=true
✨  Using the podman driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.25.2 preload ...
    > preloaded-images-k8s-v18-v1...:  406.96 MiB / 406.96 MiB  100.00% 5.57 Mi
    > gcr.io/k8s-minikube/kicbase...:  386.73 MiB / 386.73 MiB  100.00% 2.56 Mi
E1010 23:48:52.245284    9055 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=8192MB) ...
🎁  Preparing Kubernetes v1.25.2 on CRI-O 1.24.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
💢  initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
unable to recognize "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...

💣  Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=55, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=55, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp [::1]:8443: connect: connection refused


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: 
** stderr ** 
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=55, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp [::1]:8443: connect: connection refused

** /stderr **: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=55, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "kindnet", Namespace: ""
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp [::1]:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "kindnet", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp [::1]:8443: connect: connection refused


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

@jesperpedersen
Author

logs.txt

@jesperpedersen
Author

minikube 1.25.x with the rootless patch works.

@jesperpedersen
Author

Note that this is with crun, since using containerd is a workaround.

@jesperpedersen jesperpedersen changed the title [minikube 1.26.0] Fails upon startup with podman using rootless [minikube 1.26.x/1.27.x] Fails upon startup with podman using rootless Oct 11, 2022
@jesperpedersen jesperpedersen changed the title [minikube 1.26.x/1.27.x] Fails upon startup with podman using rootless [minikube 1.26.x/1.27.x/1.28.x] Fails upon startup with podman using rootless Nov 6, 2022
@rbadagandi

I ran into a very similar issue and ended up switching to kvm2. I deleted minikube and started again:

minikube v1.28.0 on Redhat 8.6
▪ MINIKUBE_ROOTLESS=true
✨ Using the podman driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
⌛ Another minikube instance is downloading dependencies... \ E1130 12:51:22.072090 99310 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426

🔥 Creating podman container (CPUs=8, Memory=16384MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: podman run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory=16384mb --cpus=8 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36: exit status 126
stdout:

stderr:
Error: OCI runtime error: runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: cannot set memory limit: container could not join or create cgroup

🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: OCI runtime error: unable to start container "f28a2e5da2a7bf84d52b50a2ed790b1483f940d897382d4b196c88aee4b02047": runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: cannot set memory limit: container could not join or create cgroup

❌ Exiting due to GUEST_PROVISION: Failed to start host: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: OCI runtime error: unable to start container "f28a2e5da2a7bf84d52b50a2ed790b1483f940d897382d4b196c88aee4b02047": runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: cannot set memory limit: container could not join or create cgroup
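That "cannot set memory limit: container could not join or create cgroup" error usually means the memory controller is not delegated to the user session. A quick check (path assumes systemd with cgroup v2):

$ cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers

If memory (and cpu) are missing from the output, the Delegate= drop-in shown earlier in the thread is the usual fix.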

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 28, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 30, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Apr 29, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
