==> Audit <==
|---------|--------------------------------|---------|--------|---------|---------------------|---------------------|
| Command | Args                           | Profile | User   | Version | Start Time          | End Time            |
|---------|--------------------------------|---------|--------|---------|---------------------|---------------------|
| start   | --addons=ingress               | devenv  | apatel | v1.33.1 | 28 May 24 11:22 EDT | 28 May 24 11:24 EDT |
|         | --driver=podman                |         |        |         |                     |                     |
|         | --container-runtime=containerd |         |        |         |                     |                     |
|         | --profile devenv               |         |        |         |                     |                     |
|---------|--------------------------------|---------|--------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/28 11:22:51
Running on machine: loungerider
Binary: Built with gc go1.22.2 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0528 11:22:51.260142 43656 out.go:291] Setting OutFile to fd 1 ...
I0528 11:22:51.260571 43656 out.go:343] isatty.IsTerminal(1) = true
I0528 11:22:51.260575 43656 out.go:304] Setting ErrFile to fd 2...
I0528 11:22:51.260579 43656 out.go:343] isatty.IsTerminal(2) = true
I0528 11:22:51.260716 43656 root.go:338] Updating PATH: /Users/apatel/.minikube/bin
I0528 11:22:51.263000 43656 out.go:298] Setting JSON to false
I0528 11:22:51.303313 43656 start.go:129] hostinfo: {"hostname":"loungerider.local","uptime":3799532,"bootTime":1713110239,"procs":848,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"03e60f98-90fe-5d02-93ec-23ce9456291f"}
W0528 11:22:51.303424 43656 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0528 11:22:51.322498 43656 out.go:177] 😄 [devenv] minikube v1.33.1 on Darwin 14.4.1
I0528 11:22:51.383215 43656 out.go:177] ▪ KUBECONFIG=/Users/apatel/.kube/devenv-minikube-config
I0528 11:22:51.365069 43656 notify.go:220] Checking for updates...
I0528 11:22:51.421135 43656 out.go:177] ▪ MINIKUBE_WANTUPDATENOTIFICATION=false
I0528 11:22:51.439628 43656 driver.go:392] Setting default libvirt URI to qemu:///system
I0528 11:22:51.621161 43656 podman.go:123] podman version: 5.0.3
I0528 11:22:51.641297 43656 out.go:177] ✨ Using the podman (experimental) driver based on user configuration
I0528 11:22:51.659175 43656 start.go:297] selected driver: podman
I0528 11:22:51.659185 43656 start.go:901] validating driver "podman" against <nil>
I0528 11:22:51.659195 43656 start.go:912] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0528 11:22:51.659326 43656 cli_runner.go:164] Run: podman system info --format json
I0528 11:22:51.831571 43656 info.go:288] podman info: {Host:{BuildahVersion:1.35.4 CgroupVersion:v2 Conmon:{Package:conmon-2.1.10-1.fc40.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.10, commit: } Distribution:{Distribution:fedora Version:40} MemFree:3571191808 MemTotal:4096901120 OCIRuntime:{Name:crun Package:crun-1.14.4-1.fc40.x86_64 Path:/usr/bin/crun Version:crun version 1.14.4 commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1 rundir: /run/user/501/crun spec: 1.0.0 +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL} SwapFree:0 SwapTotal:0 Arch:amd64 Cpus:4 Eventlogger:journald Hostname:localhost.localdomain Kernel:6.8.8-300.fc40.x86_64 Os:linux Security:{Rootless:true} Uptime:0h 0m 10.00s} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:0} RunRoot:/run/user/501/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0528 11:22:51.831724 43656 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0528 11:22:51.831908 43656 start_flags.go:393] Using suggested 3859MB memory alloc based on sys=65536MB, container=3907MB
I0528 11:22:51.832044 43656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
I0528 11:22:51.851800 43656 out.go:177] 📌 Using rootless Podman driver
I0528 11:22:51.869617 43656 cni.go:84] Creating CNI manager for ""
I0528 11:22:51.869633 43656 cni.go:143] "podman" driver + "containerd" runtime found, recommending kindnet
I0528 11:22:51.869647 43656 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0528 11:22:51.869701 43656 start.go:340] cluster config: {Name:devenv KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3859 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:devenv Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0528 11:22:51.889415 43656 out.go:177] 👍 Starting "devenv" primary control-plane node in "devenv" cluster
I0528 11:22:51.926481 43656 cache.go:121] Beginning downloading kic base image for podman with containerd
I0528 11:22:51.945390 43656 out.go:177] 🚜 Pulling base image v0.0.44 ...
I0528 11:22:51.983589 43656 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
I0528 11:22:51.983646 43656 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
I0528 11:22:51.983675 43656 preload.go:147] Found local preload: /Users/apatel/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
I0528 11:22:51.983729 43656 cache.go:56] Caching tarball of preloaded images
I0528 11:22:51.984222 43656 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory
I0528 11:22:51.984245 43656 preload.go:173] Found /Users/apatel/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0528 11:22:51.984248 43656 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory, skipping pull
I0528 11:22:51.984256 43656 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in cache, skipping pull
I0528 11:22:51.984289 43656 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
I0528 11:22:51.984294 43656 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e as a tarball
I0528 11:22:51.984785 43656 profile.go:143] Saving config to /Users/apatel/.minikube/profiles/devenv/config.json ...
I0528 11:22:51.984823 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/config.json: {Name:mk29247dea4e42a4a095dc5952bfd9f6ea3331b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
E0528 11:22:51.985491 43656 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
I0528 11:22:51.985515 43656 cache.go:194] Successfully downloaded all kic artifacts
I0528 11:22:51.985577 43656 start.go:360] acquireMachinesLock for devenv: {Name:mk34bcdc8232a0c88542f291adf06b8202579dd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0528 11:22:51.985849 43656 start.go:364] duration metric: took 255.26µs to acquireMachinesLock for "devenv"
I0528 11:22:51.985895 43656 start.go:93] Provisioning new machine with config: &{Name:devenv KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3859 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:devenv Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0528 11:22:51.985986 43656 start.go:125] createHost starting for "" (driver="podman")
I0528 11:22:52.005469 43656 out.go:204] 🔥 Creating podman container (CPUs=2, Memory=3859MB) ...
I0528 11:22:52.005890 43656 start.go:159] libmachine.API.Create for "devenv" (driver="podman")
I0528 11:22:52.005936 43656 client.go:168] LocalClient.Create starting
I0528 11:22:52.006287 43656 main.go:141] libmachine: Reading certificate data from /Users/apatel/.minikube/certs/ca.pem
I0528 11:22:52.006510 43656 main.go:141] libmachine: Decoding PEM data...
I0528 11:22:52.006539 43656 main.go:141] libmachine: Parsing certificate...
I0528 11:22:52.006645 43656 main.go:141] libmachine: Reading certificate data from /Users/apatel/.minikube/certs/cert.pem
I0528 11:22:52.006832 43656 main.go:141] libmachine: Decoding PEM data...
I0528 11:22:52.006854 43656 main.go:141] libmachine: Parsing certificate...
I0528 11:22:52.007735 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:22:52.187963 43656 cli_runner.go:164] Run: podman network inspect devenv --format "{{range .}}{{if eq .Driver "bridge"}}{{(index .Subnets 0).Subnet}},{{(index .Subnets 0).Gateway}}{{end}}{{end}}"
W0528 11:22:52.302959 43656 cli_runner.go:211] podman network inspect devenv --format "{{range .}}{{if eq .Driver "bridge"}}{{(index .Subnets 0).Subnet}},{{(index .Subnets 0).Gateway}}{{end}}{{end}}" returned with exit code 125
I0528 11:22:52.303062 43656 network_create.go:281] running [podman network inspect devenv] to gather additional debugging logs...
I0528 11:22:52.303074 43656 cli_runner.go:164] Run: podman network inspect devenv
W0528 11:22:52.418012 43656 cli_runner.go:211] podman network inspect devenv returned with exit code 125
I0528 11:22:52.418081 43656 network_create.go:284] error running [podman network inspect devenv]: podman network inspect devenv: exit status 125
stdout:
[]

stderr:
Error: network devenv: network not found
I0528 11:22:52.418091 43656 network_create.go:286] output of [podman network inspect devenv]: -- stdout --
[]

-- /stdout --
** stderr **
Error: network devenv: network not found

** /stderr **
I0528 11:22:52.418193 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:22:52.602816 43656 cli_runner.go:164] Run: podman network inspect podman --format "{{range .}}{{if eq .Driver "bridge"}}{{(index .Subnets 0).Subnet}},{{(index .Subnets 0).Gateway}}{{end}}{{end}}"
I0528 11:22:52.727875 43656 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000cb8c20}
I0528 11:22:52.727917 43656 network_create.go:124] attempt to create podman network devenv 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
I0528 11:22:52.728000 43656 cli_runner.go:164] Run: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=devenv devenv
I0528 11:22:52.840002 43656 network_create.go:108] podman network devenv 192.168.49.0/24 created
I0528 11:22:52.840033 43656 kic.go:121] calculated static IP "192.168.49.2" for the "devenv" container
I0528 11:22:52.840134 43656 cli_runner.go:164] Run: podman ps -a --format {{.Names}}
I0528 11:22:52.953165 43656 cli_runner.go:164] Run: podman volume create devenv --label name.minikube.sigs.k8s.io=devenv --label created_by.minikube.sigs.k8s.io=true
I0528 11:23:53.068149 43656 oci.go:103] Successfully created a podman volume devenv
I0528 11:23:53.068255 43656 cli_runner.go:164] Run: podman run --rm --name devenv-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=devenv --entrypoint /usr/bin/test -v devenv:/var gcr.io/k8s-minikube/kicbase:v0.0.44 -d /var/lib
I0528 11:23:27.750400 43656 cli_runner.go:217] Completed: podman run --rm --name devenv-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=devenv --entrypoint /usr/bin/test -v devenv:/var gcr.io/k8s-minikube/kicbase:v0.0.44 -d /var/lib: (34.681018237s)
I0528 11:23:27.750464 43656 oci.go:107] Successfully prepared a podman volume devenv
I0528 11:23:27.750499 43656 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
I0528 11:23:27.750549 43656 kic.go:194] Starting extracting preloaded images to volume ...
I0528 11:23:27.750713 43656 cli_runner.go:164] Run: podman run --rm --entrypoint /usr/bin/tar -v /Users/apatel/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v devenv:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44 -I lz4 -xf /preloaded.tar -C /extractDir
I0528 11:23:31.482674 43656 cli_runner.go:217] Completed: podman run --rm --entrypoint /usr/bin/tar -v /Users/apatel/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v devenv:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44 -I lz4 -xf /preloaded.tar -C /extractDir: (3.731805398s)
I0528 11:23:31.482697 43656 kic.go:203] duration metric: took 3.732050096s to extract preloaded images to volume ...
I0528 11:23:31.482815 43656 cli_runner.go:164] Run: podman info --format "'{{json .SecurityOptions}}'"
W0528 11:23:31.650540 43656 cli_runner.go:211] podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0528 11:23:31.650838 43656 cli_runner.go:164] Run: podman run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname devenv --name devenv --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=devenv --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=devenv --network devenv --ip 192.168.49.2 --volume devenv:/var:exec --memory-swap=3859mb --memory=3859mb --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.44
I0528 11:23:32.039171 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Running}}
I0528 11:23:32.158543 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}}
I0528 11:23:32.276090 43656 cli_runner.go:164] Run: podman exec devenv stat /var/lib/dpkg/alternatives/iptables
I0528 11:23:32.699414 43656 oci.go:144] the created container "devenv" has a running status.
I0528 11:23:32.699454 43656 kic.go:225] Creating ssh key for kic: /Users/apatel/.minikube/machines/devenv/id_rsa...
I0528 11:23:33.247134 43656 kic_runner.go:191] podman (temp): /Users/apatel/.minikube/machines/devenv/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0528 11:23:33.253026 43656 kic_runner.go:277] Run: /Users/apatel/GITHUB/loungerider/local-devenv-demo/.devbox/nix/profile/default/bin/podman exec -i devenv tee /home/docker/.ssh/authorized_keys
I0528 11:23:33.444531 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}}
I0528 11:23:33.560212 43656 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0528 11:23:33.560235 43656 kic_runner.go:114] Args: [podman exec --privileged devenv chown docker:docker /home/docker/.ssh/authorized_keys]
I0528 11:23:33.736505 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}}
I0528 11:23:33.851473 43656 machine.go:94] provisionDockerMachine start ...
I0528 11:23:33.851584 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:34.016613 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:34.138126 43656 main.go:141] libmachine: Using SSH client type: native
I0528 11:23:34.138516 43656 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10f436c60] 0x10f4399c0 [] 0s} 127.0.0.1 35707 }
I0528 11:23:34.138527 43656 main.go:141] libmachine: About to run SSH command:
hostname
I0528 11:23:34.275575 43656 main.go:141] libmachine: SSH cmd err, output: <nil>: devenv
I0528 11:23:34.275603 43656 ubuntu.go:169] provisioning hostname "devenv"
I0528 11:23:34.275710 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:34.445329 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:34.565905 43656 main.go:141] libmachine: Using SSH client type: native
I0528 11:23:34.566199 43656 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10f436c60] 0x10f4399c0 [] 0s} 127.0.0.1 35707 }
I0528 11:23:34.566205 43656 main.go:141] libmachine: About to run SSH command:
sudo hostname devenv && echo "devenv" | sudo tee /etc/hostname
I0528 11:23:34.719137 43656 main.go:141] libmachine: SSH cmd err, output: <nil>: devenv
I0528 11:23:34.719225 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:34.889657 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:35.004055 43656 main.go:141] libmachine: Using SSH client type: native
I0528 11:23:35.004341 43656 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10f436c60] 0x10f4399c0 [] 0s} 127.0.0.1 35707 }
I0528 11:23:35.004362 43656 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sdevenv' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 devenv/g' /etc/hosts;
			else
				echo '127.0.1.1 devenv' | sudo tee -a /etc/hosts;
			fi
		fi
I0528 11:23:35.142950 43656 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0528 11:23:35.142978 43656 ubuntu.go:175] set auth options {CertDir:/Users/apatel/.minikube CaCertPath:/Users/apatel/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/apatel/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/apatel/.minikube/machines/server.pem ServerKeyPath:/Users/apatel/.minikube/machines/server-key.pem ClientKeyPath:/Users/apatel/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/apatel/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/apatel/.minikube}
I0528 11:23:35.143014 43656 ubuntu.go:177] setting up certificates
I0528 11:23:35.143033 43656 provision.go:84] configureAuth start
I0528 11:23:35.143153 43656 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} devenv
I0528 11:23:35.260216 43656 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" devenv
I0528 11:23:35.377854 43656 provision.go:143] copyHostCerts
I0528 11:23:35.378064 43656 exec_runner.go:144] found /Users/apatel/.minikube/ca.pem, removing ...
I0528 11:23:35.378073 43656 exec_runner.go:203] rm: /Users/apatel/.minikube/ca.pem
I0528 11:23:35.378274 43656 exec_runner.go:151] cp: /Users/apatel/.minikube/certs/ca.pem --> /Users/apatel/.minikube/ca.pem (1078 bytes)
I0528 11:23:35.378612 43656 exec_runner.go:144] found /Users/apatel/.minikube/cert.pem, removing ...
I0528 11:23:35.378617 43656 exec_runner.go:203] rm: /Users/apatel/.minikube/cert.pem
I0528 11:23:35.378757 43656 exec_runner.go:151] cp: /Users/apatel/.minikube/certs/cert.pem --> /Users/apatel/.minikube/cert.pem (1119 bytes)
I0528 11:23:35.379063 43656 exec_runner.go:144] found /Users/apatel/.minikube/key.pem, removing ...
I0528 11:23:35.379067 43656 exec_runner.go:203] rm: /Users/apatel/.minikube/key.pem
I0528 11:23:35.379220 43656 exec_runner.go:151] cp: /Users/apatel/.minikube/certs/key.pem --> /Users/apatel/.minikube/key.pem (1675 bytes)
I0528 11:23:35.379448 43656 provision.go:117] generating server cert: /Users/apatel/.minikube/machines/server.pem ca-key=/Users/apatel/.minikube/certs/ca.pem private-key=/Users/apatel/.minikube/certs/ca-key.pem org=apatel.devenv san=[127.0.0.1 192.168.49.2 devenv localhost minikube]
I0528 11:23:35.525829 43656 provision.go:177] copyRemoteCerts
I0528 11:23:35.526038 43656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0528 11:23:35.526074 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:35.698962 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:35.809840 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker}
I0528 11:23:35.912412 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0528 11:23:35.941753 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/machines/server.pem --> /etc/docker/server.pem (1188 bytes)
I0528 11:23:35.969547 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0528 11:23:35.996780 43656 provision.go:87] duration metric: took 853.705323ms to configureAuth
I0528 11:23:35.996791 43656 ubuntu.go:193] setting minikube options for container-runtime
I0528 11:23:35.997049 43656 config.go:182] Loaded profile config "devenv": Driver=podman, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0528 11:23:35.997054 43656 machine.go:97] duration metric: took 2.145502566s to provisionDockerMachine
I0528 11:23:35.997059 43656 client.go:171] duration metric: took 43.989798925s to LocalClient.Create
I0528 11:23:35.997080 43656 start.go:167] duration metric: took 43.989874428s to libmachine.API.Create "devenv"
I0528 11:23:35.997088 43656 start.go:293] postStartSetup for "devenv" (driver="podman")
I0528 11:23:35.997094 43656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0528 11:23:35.997324 43656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0528 11:23:35.997370 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:36.172050 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:36.291822 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker}
I0528 11:23:36.391781 43656 ssh_runner.go:195] Run: cat /etc/os-release
I0528 11:23:36.396033 43656 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0528 11:23:36.396058 43656 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0528 11:23:36.396064 43656 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0528 11:23:36.396068 43656 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0528 11:23:36.396076 43656 filesync.go:126] Scanning /Users/apatel/.minikube/addons for local assets ...
I0528 11:23:36.396284 43656 filesync.go:126] Scanning /Users/apatel/.minikube/files for local assets ...
I0528 11:23:36.396426 43656 start.go:296] duration metric: took 399.321236ms for postStartSetup
I0528 11:23:36.397150 43656 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} devenv
I0528 11:23:36.514119 43656 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" devenv
I0528 11:23:36.628178 43656 profile.go:143] Saving config to /Users/apatel/.minikube/profiles/devenv/config.json ...
I0528 11:23:36.629230 43656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0528 11:23:36.629273 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:36.807087 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:36.927107 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker}
I0528 11:23:37.022652 43656 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0528 11:23:37.028578 43656 start.go:128] duration metric: took 45.041225597s to createHost
I0528 11:23:37.028593 43656 start.go:83] releasing machines lock for "devenv", held for 45.041382333s
I0528 11:23:37.028701 43656 cli_runner.go:164] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} devenv
I0528 11:23:37.144457 43656 cli_runner.go:164] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" devenv
I0528 11:23:37.266294 43656 ssh_runner.go:195] Run: cat /version.json
I0528 11:23:37.266362 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:37.266428 43656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0528 11:23:37.266500 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:37.448734 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:37.448734 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv
I0528 11:23:37.579507 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker}
I0528 11:23:37.581475 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker}
I0528 11:23:37.673159 43656 ssh_runner.go:195] Run: systemctl --version
I0528 11:23:38.003103 43656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0528 11:23:38.009260 43656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0528 11:23:38.037913 43656 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0528 11:23:38.038146 43656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0528 11:23:38.066551 43656 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0528 11:23:38.066560 43656 start.go:494] detecting cgroup driver to use...
I0528 11:23:38.066573 43656 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0528 11:23:38.067975 43656 ssh_runner.go:195] Run: uname -r
I0528 11:23:38.072144 43656 ssh_runner.go:195] Run: sh -euc "(echo 6.8.8-300.fc40.x86_64; echo 5.11) | sort -V | head -n1"
I0528 11:23:38.077458 43656 ssh_runner.go:195] Run: uname -r
I0528 11:23:38.081266 43656 ssh_runner.go:195] Run: sh -euc "(echo 6.8.8-300.fc40.x86_64; echo 5.13) | sort -V | head -n1"
I0528 11:23:38.086524 43656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0528 11:23:38.100489 43656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0528 11:23:38.113563 43656 docker.go:217] disabling cri-docker service (if available) ...
I0528 11:23:38.113970 43656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0528 11:23:38.128425 43656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0528 11:23:38.144421 43656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0528 11:23:38.236100 43656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0528 11:23:38.336113 43656 docker.go:233] disabling docker service ...
I0528 11:23:38.336514 43656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0528 11:23:38.360868 43656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0528 11:23:38.375112 43656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0528 11:23:38.475232 43656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0528 11:23:38.564726 43656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0528 11:23:38.578365 43656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0528 11:23:38.596360 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0528 11:23:38.608315 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = true|' /etc/containerd/config.toml"
I0528 11:23:38.621284 43656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0528 11:23:38.621502 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0528 11:23:38.634515 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0528 11:23:38.646663 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0528 11:23:38.659391 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0528 11:23:38.671386 43656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0528 11:23:38.682490 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0528 11:23:38.694212 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0528 11:23:38.706031 43656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0528 11:23:38.718035 43656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0528 11:23:38.728168 43656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0528 11:23:38.728412 43656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
W0528 11:23:38.740524 43656 crio.go:169] "sudo sysctl net.bridge.bridge-nf-call-iptables" failed, which may be ok: sudo modprobe br_netfilter: Process exited with status 1
stdout:

stderr:
modprobe: ERROR: could not insert 'br_netfilter': Operation not permitted
I0528 11:23:38.740795 43656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0528 11:23:38.751319 43656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0528 11:23:38.848759 43656 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0528 11:23:38.962733 43656 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
I0528 11:23:38.962812 43656 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0528 11:23:38.967563 43656 start.go:562] Will wait 60s for crictl version
I0528 11:23:38.967781 43656 ssh_runner.go:195] Run: which crictl
I0528 11:23:38.971786 43656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0528 11:23:39.010140 43656 start.go:578] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.31
RuntimeApiVersion: v1
I0528 11:23:39.010500 43656 ssh_runner.go:195] Run: containerd --version
I0528 11:23:39.037156 43656 ssh_runner.go:195] Run: containerd --version
I0528 11:23:39.103603 43656 out.go:177] 📦 Preparing Kubernetes v1.30.0 on containerd 1.6.31 ...
I0528 11:23:39.124356 43656 ssh_runner.go:195] Run: grep fe80::1 host.minikube.internal$ /etc/hosts
I0528 11:23:39.131034 43656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "fe80::1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0528 11:23:39.144855 43656 cli_runner.go:164] Run: podman version --format {{.Version}}
I0528 11:23:39.319782 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" devenv
I0528 11:23:39.438638 43656 kubeadm.go:877] updating cluster {Name:devenv KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3859 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:devenv Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0528 11:23:39.438757 43656 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
I0528 11:23:39.439000 43656 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 11:23:39.475798 43656 containerd.go:627] all images are preloaded for containerd runtime.
I0528 11:23:39.475809 43656 containerd.go:534] Images already preloaded, skipping extraction
I0528 11:23:39.476043 43656 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 11:23:39.511661 43656 containerd.go:627] all images are preloaded for containerd runtime.
I0528 11:23:39.511679 43656 cache_images.go:84] Images are preloaded, skipping loading
I0528 11:23:39.511684 43656 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 containerd true true} ...
I0528 11:23:39.511784 43656 kubeadm.go:940] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=KubeletInUserNamespace=true --hostname-override=devenv --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.30.0 ClusterName:devenv Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0528 11:23:39.512011 43656 ssh_runner.go:195] Run: sudo crictl info
I0528 11:23:39.548015 43656 cni.go:84] Creating CNI manager for ""
I0528 11:23:39.548021 43656 cni.go:143] "podman" driver + "containerd" runtime found, recommending kindnet
I0528 11:23:39.548028 43656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0528 11:23:39.548042 43656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:devenv NodeName:devenv DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:KubeletInUserNamespace=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:KubeletInUserNamespace=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:KubeletInUserNamespace=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0528 11:23:39.548141 43656 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "devenv"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    feature-gates: "KubeletInUserNamespace=true"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    feature-gates: "KubeletInUserNamespace=true"
    leader-elect: "false"
scheduler:
  extraArgs:
    feature-gates: "KubeletInUserNamespace=true"
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0528 11:23:39.548351 43656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0528 11:23:39.558188 43656 binaries.go:44] Found k8s binaries, skipping transfer
I0528 11:23:39.558601 43656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0528 11:23:39.569522 43656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0528 11:23:39.588531 43656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0528 11:23:39.608208 43656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
I0528 11:23:39.629556 43656 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0528 11:23:39.634920 43656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0528 11:23:39.649021 43656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0528 11:23:39.738689 43656 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0528 11:23:39.767677 43656 certs.go:68] Setting up /Users/apatel/.minikube/profiles/devenv for IP: 192.168.49.2
I0528 11:23:39.767686 43656 certs.go:194] generating shared ca certs ...
I0528 11:23:39.767695 43656 certs.go:226] acquiring lock for ca certs: {Name:mk94f7b87453e5365e17b60df2144481ccb8045b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:39.768189 43656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/apatel/.minikube/ca.key
I0528 11:23:39.768449 43656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/apatel/.minikube/proxy-client-ca.key
I0528 11:23:39.768465 43656 certs.go:256] generating profile certs ...
I0528 11:23:39.768521 43656 certs.go:363] generating signed profile cert for "minikube-user": /Users/apatel/.minikube/profiles/devenv/client.key
I0528 11:23:39.768536 43656 crypto.go:68] Generating cert /Users/apatel/.minikube/profiles/devenv/client.crt with IP's: []
I0528 11:23:39.922332 43656 crypto.go:156] Writing cert to /Users/apatel/.minikube/profiles/devenv/client.crt ...
I0528 11:23:39.922342 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/client.crt: {Name:mkf41716a5b406752a05b4fcd400e4bda795bd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:39.922775 43656 crypto.go:164] Writing key to /Users/apatel/.minikube/profiles/devenv/client.key ...
I0528 11:23:39.922782 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/client.key: {Name:mk5aaedbf1ba5ad2d4766855c329e9fedb3774a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:39.923186 43656 certs.go:363] generating signed profile cert for "minikube": /Users/apatel/.minikube/profiles/devenv/apiserver.key.24422fc6
I0528 11:23:39.923201 43656 crypto.go:68] Generating cert /Users/apatel/.minikube/profiles/devenv/apiserver.crt.24422fc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0528 11:23:39.974110 43656 crypto.go:156] Writing cert to /Users/apatel/.minikube/profiles/devenv/apiserver.crt.24422fc6 ...
I0528 11:23:39.974118 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/apiserver.crt.24422fc6: {Name:mk10daf6a95904c56cce5770c53174d17f387038 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:39.974415 43656 crypto.go:164] Writing key to /Users/apatel/.minikube/profiles/devenv/apiserver.key.24422fc6 ...
I0528 11:23:39.974420 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/apiserver.key.24422fc6: {Name:mk7cc28586887878a5799a43fa0c64dbefa3cc78 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:39.974765 43656 certs.go:381] copying /Users/apatel/.minikube/profiles/devenv/apiserver.crt.24422fc6 -> /Users/apatel/.minikube/profiles/devenv/apiserver.crt
I0528 11:23:39.977374 43656 certs.go:385] copying /Users/apatel/.minikube/profiles/devenv/apiserver.key.24422fc6 -> /Users/apatel/.minikube/profiles/devenv/apiserver.key
I0528 11:23:39.977764 43656 certs.go:363] generating signed profile cert for "aggregator": /Users/apatel/.minikube/profiles/devenv/proxy-client.key
I0528 11:23:39.977780 43656 crypto.go:68] Generating cert /Users/apatel/.minikube/profiles/devenv/proxy-client.crt with IP's: []
I0528 11:23:40.136787 43656 crypto.go:156] Writing cert to /Users/apatel/.minikube/profiles/devenv/proxy-client.crt ...
I0528 11:23:40.136803 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/proxy-client.crt: {Name:mk628ac3dd1b8a2eb234bec018ec9c37dc6531eb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:40.137217 43656 crypto.go:164] Writing key to /Users/apatel/.minikube/profiles/devenv/proxy-client.key ...
I0528 11:23:40.137223 43656 lock.go:35] WriteFile acquiring /Users/apatel/.minikube/profiles/devenv/proxy-client.key: {Name:mk1f59a6c4914ab4d877a9626d36b856dcc73ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0528 11:23:40.138558 43656 certs.go:484] found cert: /Users/apatel/.minikube/certs/ca-key.pem (1679 bytes)
I0528 11:23:40.138662 43656 certs.go:484] found cert: /Users/apatel/.minikube/certs/ca.pem (1078 bytes)
I0528 11:23:40.138754 43656 certs.go:484] found cert: /Users/apatel/.minikube/certs/cert.pem (1119 bytes)
I0528 11:23:40.138835 43656 certs.go:484] found cert: /Users/apatel/.minikube/certs/key.pem (1675 bytes)
I0528 11:23:40.139552 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0528 11:23:40.169052 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0528 11:23:40.197014 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0528 11:23:40.223864 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0528 11:23:40.251027 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/profiles/devenv/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0528 11:23:40.278021 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/profiles/devenv/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0528 11:23:40.305064 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/profiles/devenv/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0528 11:23:40.330636 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/profiles/devenv/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0528 11:23:40.357512 43656 ssh_runner.go:362] scp /Users/apatel/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0528 11:23:40.382950 43656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0528 11:23:40.402276 43656 ssh_runner.go:195] Run: openssl version
I0528 11:23:40.409168 43656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0528 11:23:40.419796 43656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0528 11:23:40.424569 43656 certs.go:528] hashing: -rw-r--r--. 1 root root 1111 Apr 19 19:43 /usr/share/ca-certificates/minikubeCA.pem
I0528 11:23:40.424787 43656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0528 11:23:40.432293 43656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0528 11:23:40.443007 43656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0528 11:23:40.447897 43656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0528 11:23:40.448116 43656 kubeadm.go:391] StartCluster: {Name:devenv KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3859 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:devenv Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:KubeletInUserNamespace=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0528 11:23:40.448192 43656 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0528 11:23:40.448395 43656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0528 11:23:40.485668 43656 cri.go:89] found id: ""
I0528 11:23:40.485886 43656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0528 11:23:40.496049 43656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0528 11:23:40.506024 43656 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0528 11:23:40.506376 43656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0528 11:23:40.516981 43656 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0528 11:23:40.517009 43656 kubeadm.go:156] found existing configuration files:
I0528 11:23:40.517426 43656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0528 11:23:40.527741 43656 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0528 11:23:40.528072 43656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0528 11:23:40.540765 43656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0528 11:23:40.553624 43656 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0528 11:23:40.554013 43656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0528 11:23:40.565670 43656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0528 11:23:40.577625 43656 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0528 11:23:40.578034 43656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0528 11:23:40.588186 43656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0528 11:23:40.598674 43656 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0528 11:23:40.598995 43656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0528 11:23:40.608246 43656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0528 11:23:40.649085 43656 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
I0528 11:23:40.649146 43656 kubeadm.go:309] [preflight] Running pre-flight checks
I0528 11:23:40.691150 43656 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
I0528 11:23:40.691240 43656 kubeadm.go:309] KERNEL_VERSION: 6.8.8-300.fc40.x86_64
I0528 11:23:40.691304 43656 kubeadm.go:309] CONFIG_NAMESPACES: enabled
I0528 11:23:40.691362 43656 kubeadm.go:309] CONFIG_NET_NS: enabled
I0528 11:23:40.691441 43656 kubeadm.go:309] CONFIG_PID_NS: enabled
I0528 11:23:40.691503 43656 kubeadm.go:309] CONFIG_IPC_NS: enabled
I0528 11:23:40.691548 43656 kubeadm.go:309] CONFIG_UTS_NS: enabled
I0528 11:23:40.691624 43656 kubeadm.go:309] CONFIG_CGROUPS: enabled
I0528 11:23:40.691686 43656 kubeadm.go:309] CONFIG_CGROUP_CPUACCT: enabled
I0528 11:23:40.691743 43656 kubeadm.go:309] CONFIG_CGROUP_DEVICE: enabled
I0528 11:23:40.691808 43656 kubeadm.go:309] CONFIG_CGROUP_FREEZER: enabled
I0528 11:23:40.691891 43656 kubeadm.go:309] CONFIG_CGROUP_PIDS: enabled
I0528 11:23:40.692012 43656 kubeadm.go:309] CONFIG_CGROUP_SCHED: enabled
I0528 11:23:40.692095 43656 kubeadm.go:309] CONFIG_CPUSETS: enabled
I0528 11:23:40.692155 43656 kubeadm.go:309] CONFIG_MEMCG: enabled
I0528 11:23:40.692190 43656 kubeadm.go:309] CONFIG_INET: enabled
I0528 11:23:40.692251 43656 kubeadm.go:309] CONFIG_EXT4_FS: enabled
I0528 11:23:40.692288 43656 kubeadm.go:309] CONFIG_PROC_FS: enabled
I0528 11:23:40.692343 43656 kubeadm.go:309] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
I0528 11:23:40.692398 43656 kubeadm.go:309] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
I0528 11:23:40.692433 43656 kubeadm.go:309] CONFIG_FAIR_GROUP_SCHED: enabled
I0528 11:23:40.692496 43656 kubeadm.go:309] CONFIG_OVERLAY_FS: enabled (as module)
I0528 11:23:40.692576 43656 kubeadm.go:309] CONFIG_AUFS_FS: not set - Required for aufs.
I0528 11:23:40.692627 43656 kubeadm.go:309] CONFIG_BLK_DEV_DM: enabled I0528 11:23:40.692667 43656 kubeadm.go:309] CONFIG_CFS_BANDWIDTH: enabled I0528 11:23:40.692710 43656 kubeadm.go:309] CONFIG_CGROUP_HUGETLB: enabled I0528 11:23:40.692743 43656 kubeadm.go:309] CONFIG_SECCOMP: enabled I0528 11:23:40.692806 43656 kubeadm.go:309] CONFIG_SECCOMP_FILTER: enabled I0528 11:23:40.692863 43656 kubeadm.go:309] OS: Linux I0528 11:23:40.692909 43656 kubeadm.go:309] CGROUPS_CPU: enabled I0528 11:23:40.693015 43656 kubeadm.go:309] CGROUPS_CPUSET: missing I0528 11:23:40.693074 43656 kubeadm.go:309] CGROUPS_DEVICES: enabled I0528 11:23:40.693131 43656 kubeadm.go:309] CGROUPS_FREEZER: enabled I0528 11:23:40.693177 43656 kubeadm.go:309] CGROUPS_MEMORY: enabled I0528 11:23:40.693210 43656 kubeadm.go:309] CGROUPS_PIDS: enabled I0528 11:23:40.693244 43656 kubeadm.go:309] CGROUPS_HUGETLB: missing I0528 11:23:40.693276 43656 kubeadm.go:309] CGROUPS_IO: enabled I0528 11:23:40.750300 43656 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster I0528 11:23:40.750450 43656 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection I0528 11:23:40.750624 43656 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0528 11:23:40.962796 43656 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0528 11:23:41.001519 43656 out.go:204] ▪ Generating certificates and keys ... I0528 11:23:41.001581 43656 kubeadm.go:309] [certs] Using existing ca certificate authority I0528 11:23:41.001677 43656 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk I0528 11:23:41.007486 43656 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key I0528 11:23:41.162862 43656 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key I0528 11:23:41.527019 43656 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key I0528 11:23:41.755318 43656 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key I0528 11:23:42.132200 43656 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key I0528 11:23:42.132364 43656 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [devenv localhost] and IPs [192.168.49.2 127.0.0.1 ::1] I0528 11:23:42.244670 43656 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key I0528 11:23:42.245086 43656 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [devenv localhost] and IPs [192.168.49.2 127.0.0.1 ::1] I0528 11:23:42.414957 43656 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key I0528 11:23:42.578279 43656 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key I0528 11:23:42.701197 43656 kubeadm.go:309] [certs] Generating "sa" key and public key I0528 11:23:42.701363 43656 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0528 11:23:42.810841 43656 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file I0528 11:23:43.150125 43656 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file I0528 11:23:43.241499 43656 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0528 11:23:43.363699 43656 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0528 11:23:43.592247 43656 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0528 
11:23:43.592879 43656 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0528 11:23:43.594691 43656 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0528 11:23:43.634108 43656 out.go:204] ▪ Booting up control plane ... I0528 11:23:43.634211 43656 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver" I0528 11:23:43.634286 43656 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0528 11:23:43.634339 43656 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler" I0528 11:23:43.634444 43656 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0528 11:23:43.634523 43656 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0528 11:23:43.634554 43656 kubeadm.go:309] [kubelet-start] Starting the kubelet I0528 11:23:43.713050 43656 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests" I0528 11:23:43.713126 43656 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s I0528 11:23:44.716953 43656 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00363406s I0528 11:23:44.717099 43656 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s I0528 11:23:49.219477 43656 kubeadm.go:309] [api-check] The API server is healthy after 4.502458822s I0528 11:23:49.230317 43656 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0528 11:23:49.239130 43656 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0528 11:23:49.253913 43656 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs I0528 11:23:49.254081 43656 kubeadm.go:309] [mark-control-plane] Marking the node devenv as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0528 11:23:49.260424 43656 kubeadm.go:309] [bootstrap-token] Using token: 5r4dir.28rclap5crsa2psc I0528 11:23:49.281554 43656 out.go:204] ▪ Configuring RBAC rules ... 
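Note on the [bootstrap-token] lines that follow: the token 5r4dir.28rclap5crsa2psc issued just above is the same one printed later in the join instructions. It can be confirmed (or rotated) after start-up with kubeadm inside the node; a minimal sketch, assuming `minikube ssh` passes the command through to the node, and reusing the binary path and kubeconfig that appear elsewhere in this log:

    # sketch: list bootstrap tokens on the devenv node (paths taken from this log)
    minikube ssh -p devenv -- sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token list \
      --kubeconfig /var/lib/minikube/kubeconfig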
I0528 11:23:49.281676 43656 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0528 11:23:49.321637 43656 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0528 11:23:49.327064 43656 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0528 11:23:49.329213 43656 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0528 11:23:49.332612 43656 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0528 11:23:49.336299 43656 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0528 11:23:49.626607 43656 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0528 11:23:50.043216 43656 kubeadm.go:309] [addons] Applied essential addon: CoreDNS I0528 11:23:50.624966 43656 kubeadm.go:309] [addons] Applied essential addon: kube-proxy I0528 11:23:50.625748 43656 kubeadm.go:309] I0528 11:23:50.625799 43656 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully! I0528 11:23:50.625802 43656 kubeadm.go:309] I0528 11:23:50.625882 43656 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user: I0528 11:23:50.625892 43656 kubeadm.go:309] I0528 11:23:50.625915 43656 kubeadm.go:309] mkdir -p $HOME/.kube I0528 11:23:50.625981 43656 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0528 11:23:50.626045 43656 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0528 11:23:50.626049 43656 kubeadm.go:309] I0528 11:23:50.626104 43656 kubeadm.go:309] Alternatively, if you are the root user, you can run: I0528 11:23:50.626108 43656 kubeadm.go:309] I0528 11:23:50.626154 43656 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf I0528 11:23:50.626159 43656 kubeadm.go:309] I0528 11:23:50.626220 43656 kubeadm.go:309] You should now deploy a pod network to the cluster. 
I0528 11:23:50.626300 43656 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0528 11:23:50.626362 43656 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0528 11:23:50.626365 43656 kubeadm.go:309] I0528 11:23:50.626450 43656 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities I0528 11:23:50.626531 43656 kubeadm.go:309] and service account keys on each node and then running the following as root: I0528 11:23:50.626534 43656 kubeadm.go:309] I0528 11:23:50.626616 43656 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5r4dir.28rclap5crsa2psc \ I0528 11:23:50.626714 43656 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:1b9c8b39d1ddf0f374c4f48c14354536383673d7822b239f0576d09b452c9393 \ I0528 11:23:50.626736 43656 kubeadm.go:309] --control-plane I0528 11:23:50.626738 43656 kubeadm.go:309] I0528 11:23:50.626813 43656 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root: I0528 11:23:50.626818 43656 kubeadm.go:309] I0528 11:23:50.626892 43656 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5r4dir.28rclap5crsa2psc \ I0528 11:23:50.627000 43656 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:1b9c8b39d1ddf0f374c4f48c14354536383673d7822b239f0576d09b452c9393 I0528 11:23:50.628317 43656 kubeadm.go:309] [WARNING SystemVerification]: missing optional cgroups: hugetlb I0528 11:23:50.628384 43656 kubeadm.go:309] [WARNING SystemVerification]: missing required cgroups: cpuset I0528 11:23:50.628485 43656 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0528 11:23:50.628497 43656 cni.go:84] Creating CNI manager for "" I0528 11:23:50.628502 43656 cni.go:143] "podman" driver + "containerd" runtime found, recommending kindnet I0528 11:23:50.699009 43656 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ... I0528 11:23:50.718070 43656 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0528 11:23:50.724110 43656 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ... 
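minikube recommended kindnet for the podman driver + containerd runtime pairing (cni.go:143 above), and the next lines push that manifest with the bundled kubectl. Once the apply goes through, the CNI rollout can be checked from the host; a sketch, assuming the kubeconfig written by this start is active under the context name devenv and that the DaemonSet keeps minikube's usual name and label (app=kindnet), neither of which is printed in this log:

    # sketch: confirm the kindnet DaemonSet and its pods (names/labels assumed)
    kubectl --context devenv -n kube-system get daemonset kindnet
    kubectl --context devenv -n kube-system get pods -l app=kindnet -o wide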
I0528 11:23:50.724116 43656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes) I0528 11:23:50.744885 43656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0528 11:23:50.970882 43656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0528 11:23:50.971132 43656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0528 11:23:50.971131 43656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes devenv minikube.k8s.io/updated_at=2024_05_28T11_23_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=v1.33.1 minikube.k8s.io/name=devenv minikube.k8s.io/primary=true I0528 11:23:50.981102 43656 ops.go:34] apiserver oom_adj: 3 I0528 11:23:50.981108 43656 ops.go:39] adjusting apiserver oom_adj to -10 I0528 11:23:50.981114 43656 ssh_runner.go:195] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj" I0528 11:23:51.064759 43656 kubeadm.go:1107] duration metric: took 93.858883ms to wait for elevateKubeSystemPrivileges W0528 11:23:51.064787 43656 kubeadm.go:278] unable to adjust resource limits for primary control-plane node: oom_adj adjust: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1 stdout: -10 stderr: tee: /proc/1322/oom_adj: Permission denied W0528 11:23:51.064810 43656 kubeadm.go:286] apiserver tunnel failed: apiserver port not set I0528 11:23:51.064818 43656 kubeadm.go:393] duration metric: took 10.61638823s to StartCluster I0528 11:23:51.064831 43656 settings.go:142] acquiring lock: {Name:mkc371f8850488e45ff9079b799dc8e544b12c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0528 11:23:51.065003 43656 settings.go:150] Updating kubeconfig: /Users/apatel/.kube/devenv-minikube-config I0528 11:23:51.065436 43656 lock.go:35] WriteFile acquiring /Users/apatel/.kube/devenv-minikube-config: {Name:mk4d9a0148cc9c238b289bd7536c77772b3a8a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0528 11:23:51.065832 43656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0528 11:23:51.065856 43656 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true} I0528 11:23:51.085443 43656 out.go:177] 🔎 Verifying Kubernetes components... 
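The oom_adj failure above ("tee: /proc/1322/oom_adj: Permission denied") appears to be an artifact of the rootless Podman driver: lowering another process's OOM score needs privileges in the initial user namespace, which root inside a rootless container does not have, so minikube logs the warning and continues; the cluster is unaffected. The value the apiserver was left with can be read back; a sketch, with the same `minikube ssh` assumption as above:

    # sketch: read the apiserver's current oom_adj on the node (reading needs no privilege)
    minikube ssh -p devenv -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'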
I0528 11:23:51.065875 43656 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] I0528 11:23:51.076511 43656 config.go:182] Loaded profile config "devenv": Driver=podman, ContainerRuntime=containerd, KubernetesVersion=v1.30.0 I0528 11:23:51.138503 43656 addons.go:69] Setting default-storageclass=true in profile "devenv" I0528 11:23:51.138503 43656 addons.go:69] Setting storage-provisioner=true in profile "devenv" I0528 11:23:51.138519 43656 addons.go:69] Setting ingress=true in profile "devenv" I0528 11:23:51.138542 43656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "devenv" I0528 11:23:51.138544 43656 addons.go:234] Setting addon storage-provisioner=true in "devenv" I0528 11:23:51.138544 43656 addons.go:234] Setting addon ingress=true in "devenv" I0528 11:23:51.138567 43656 host.go:66] Checking if "devenv" exists ... I0528 11:23:51.138567 43656 host.go:66] Checking if "devenv" exists ... I0528 11:23:51.138685 43656 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0528 11:23:51.139511 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}} I0528 11:23:51.139737 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}} I0528 11:23:51.140594 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}} I0528 11:23:51.146804 43656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n fe80::1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0528 11:23:51.323809 43656 ssh_runner.go:195] Run: sudo systemctl start kubelet I0528 11:23:51.403568 43656 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0528 11:23:51.386972 43656 addons.go:234] Setting addon default-storageclass=true in "devenv" I0528 11:23:51.422713 43656 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml I0528 11:23:51.441418 43656 out.go:177] 💡 After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1" I0528 11:23:51.441418 43656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0528 11:23:51.441491 43656 host.go:66] Checking if "devenv" exists ... 
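Three addons are enabled on this start: storage-provisioner, default-storageclass, and ingress. Note the 💡 hint above: with this driver the ingress controller is only reachable at 127.0.0.1 while `minikube -p devenv tunnel` is left running in another terminal. Post-start verification is the standard pair of commands (nothing assumed beyond the profile and context names):

    # sketch: confirm addon state and the ingress-nginx workloads
    minikube -p devenv addons list
    kubectl --context devenv -n ingress-nginx get pods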
I0528 11:23:51.441555 43656 cli_runner.go:164] Run: podman version --format {{.Version}} I0528 11:23:51.461955 43656 cli_runner.go:164] Run: podman container inspect devenv --format={{.State.Status}} I0528 11:23:51.495810 43656 out.go:177] ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1 I0528 11:23:51.534349 43656 out.go:177] ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1 I0528 11:23:51.572274 43656 out.go:177] ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.10.1 I0528 11:23:51.591628 43656 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml I0528 11:23:51.591637 43656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes) I0528 11:23:51.591728 43656 cli_runner.go:164] Run: podman version --format {{.Version}} I0528 11:23:51.599827 43656 start.go:946] {"host.minikube.internal": fe80::1} host record injected into CoreDNS's ConfigMap I0528 11:23:51.599906 43656 cli_runner.go:164] Run: podman version --format {{.Version}} I0528 11:23:51.614588 43656 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml I0528 11:23:51.614603 43656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0528 11:23:51.614671 43656 cli_runner.go:164] Run: podman version --format {{.Version}} I0528 11:23:51.640502 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv I0528 11:23:51.765331 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker} I0528 11:23:51.794246 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" devenv I0528 11:23:51.798828 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv I0528 11:23:51.813783 43656 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" devenv I0528 11:23:51.884763 43656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0528 11:23:51.919781 43656 api_server.go:52] waiting for apiserver process to appear ... I0528 11:23:51.920012 43656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0528 11:23:51.921181 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker} I0528 11:23:51.941685 43656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35707 SSHKeyPath:/Users/apatel/.minikube/machines/devenv/id_rsa Username:docker} I0528 11:23:52.043149 43656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml I0528 11:23:52.062428 43656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0528 11:23:52.106779 43656 kapi.go:248] "coredns" deployment in "kube-system" namespace and "devenv" context rescaled to 1 replicas I0528 11:23:52.245606 43656 api_server.go:72] duration metric: took 1.179697366s to wait for apiserver process to appear ... 
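The podman container inspect calls above resolve the host ports podman mapped for the node's 22/tcp (35707 here) and 8443/tcp; the apiserver health probe in the next lines goes through the latter. The same probe works by hand; a sketch, noting that the mapped port (36493 in the healthz URL below) is ephemeral and changes on every start, and that -k is needed because the serving cert is signed by minikubeCA:

    # sketch: find the mapped apiserver port, then hit /healthz on it
    podman container inspect devenv --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    curl -k https://127.0.0.1:36493/healthz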
I0528 11:23:52.245614 43656 api_server.go:88] waiting for apiserver healthz status ... I0528 11:23:52.245628 43656 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:36493/healthz ... I0528 11:23:52.254409 43656 api_server.go:279] https://127.0.0.1:36493/healthz returned 200: ok I0528 11:23:52.255757 43656 api_server.go:141] control plane version: v1.30.0 I0528 11:23:52.255767 43656 api_server.go:131] duration metric: took 10.149174ms to wait for apiserver health ... I0528 11:23:52.255775 43656 system_pods.go:43] waiting for kube-system pods to appear ... I0528 11:23:52.261789 43656 system_pods.go:59] 5 kube-system pods found I0528 11:23:52.261804 43656 system_pods.go:61] "etcd-devenv" [45b5663e-6c34-4382-b99c-4b0d93a2d107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0528 11:23:52.261808 43656 system_pods.go:61] "kube-apiserver-devenv" [56d629ef-57a9-4929-ba8f-181e9b1fbf07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0528 11:23:52.261814 43656 system_pods.go:61] "kube-controller-manager-devenv" [422d3af0-b97a-4702-8906-4d94b41052a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0528 11:23:52.261820 43656 system_pods.go:61] "kube-scheduler-devenv" [a8a5b9dd-4532-4a0f-a2c1-9f1d66d9501d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0528 11:23:52.261845 43656 system_pods.go:61] "storage-provisioner" [99b684db-9b6f-4de6-8ff2-531c4c5d2bdc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.) I0528 11:23:52.261849 43656 system_pods.go:74] duration metric: took 6.071057ms to wait for pod list to return data ... I0528 11:23:52.261855 43656 kubeadm.go:576] duration metric: took 1.195950442s to wait for: map[apiserver:true system_pods:true] I0528 11:23:52.261864 43656 node_conditions.go:102] verifying NodePressure condition ... I0528 11:23:52.264727 43656 node_conditions.go:122] node storage ephemeral capacity is 104266732Ki I0528 11:23:52.264741 43656 node_conditions.go:123] node cpu capacity is 4 I0528 11:23:52.264751 43656 node_conditions.go:105] duration metric: took 2.883482ms to run NodePressure ... I0528 11:23:52.264759 43656 start.go:240] waiting for startup goroutines ... I0528 11:23:52.712260 43656 addons.go:470] Verifying addon ingress=true in "devenv" I0528 11:23:52.732989 43656 out.go:177] 🔎 Verifying ingress addon... I0528 11:23:52.770837 43656 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ... 
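The long run of kapi.go:96 lines that follows is minikube polling roughly every 500ms until the ingress-nginx pods leave Pending, which takes about 37 seconds in this run (kapi.go:107 further down) while the admission jobs run and the controller image is pulled (see the containerd section below). A one-shot equivalent with kubectl; a sketch, using the upstream ingress-nginx controller label rather than anything printed in this log:

    # sketch: block until the ingress controller reports Ready (label assumed from upstream defaults)
    kubectl --context devenv -n ingress-nginx wait pod \
      -l app.kubernetes.io/component=controller \
      --for=condition=Ready --timeout=240s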
I0528 11:23:52.773352 43656 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx I0528 11:24:02.774532 43656 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx I0528 11:24:02.774539 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:03.274769 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:03.782865 43656 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx I0528 11:24:03.782872 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:04.278203 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:04.776527 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:05.276829 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:05.776207 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:06.275359 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:06.776292 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:07.275294 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:07.775631 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:08.275315 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:08.778077 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:09.277176 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:09.777744 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:10.276864 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:10.778126 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:11.276818 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:11.778104 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:12.277043 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:12.777323 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:13.277929 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:13.777963 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:14.275415 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:14.776898 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:15.275162 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:15.776195 43656 
kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:16.276626 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:16.777476 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:17.276877 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:17.775731 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:18.275643 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:18.777666 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:19.276054 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:19.777866 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:20.275322 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:20.776954 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:21.288551 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:21.776225 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:22.278145 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:22.776447 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:23.279335 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:23.776396 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:24.275644 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:24.776566 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:25.276561 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:25.776678 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:26.275560 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:26.775685 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:27.277315 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:27.776650 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:28.276569 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:28.776803 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:29.276602 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 11:24:29.775524 43656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [] I0528 
11:24:30.276994 43656 kapi.go:107] duration metric: took 37.505032173s to wait for app.kubernetes.io/name=ingress-nginx ... I0528 11:24:30.301451 43656 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass, ingress I0528 11:24:30.340946 43656 addons.go:505] duration metric: took 39.273883576s for enable addons: enabled=[storage-provisioner default-storageclass ingress] I0528 11:24:30.340970 43656 start.go:245] waiting for cluster config update ... I0528 11:24:30.340980 43656 start.go:254] writing updated cluster config ... I0528 11:24:30.342110 43656 ssh_runner.go:195] Run: rm -f paused I0528 11:24:30.402931 43656 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0) I0528 11:24:30.421612 43656 out.go:177] 🏄 Done! kubectl is now configured to use "devenv" cluster and "default" namespace by default ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD 21f2d7a85340b ee54966f3891d 38 seconds ago Running controller 0 5b0a999ad645e ingress-nginx-controller-768f948f8f-gd46m 5ab8780430067 684c5ea3b61b2 49 seconds ago Exited patch 0 45359f27ba199 ingress-nginx-admission-patch-hqbms a440059d316ac 684c5ea3b61b2 50 seconds ago Exited create 0 fc44d4a2850fb ingress-nginx-admission-create-sz79q 31ad4f3955ce9 cbb01a7bd410d 51 seconds ago Running coredns 0 3a0593097f7a2 coredns-7db6d8ff4d-8twbg fe3cc992b9a11 4950bb10b3f87 About a minute ago Running kindnet-cni 0 0392da8452d95 kindnet-h7kkw a6e769bd785a0 a0bf559e280cf About a minute ago Running kube-proxy 0 2f4ad34680f71 kube-proxy-ctc8b b5dff207488bd 6e38f40d628db About a minute ago Running storage-provisioner 0 188e350cecab7 storage-provisioner e793e67007532 3861cfcd7c04c About a minute ago Running etcd 0 02773dfcb73d4 etcd-devenv 073a94b480fbe c42f13656d0b2 About a minute ago Running kube-apiserver 0 f18ea5314e851 kube-apiserver-devenv 7000737def60e c7aad43836fa5 About a minute ago Running kube-controller-manager 0 f39da081c53a3 kube-controller-manager-devenv 2e80204f6c7fc 259c8277fcbbc About a minute ago Running kube-scheduler 0 8dd1bc3130edf kube-scheduler-devenv ==> containerd <== May 28 15:24:16 devenv containerd[696]: time="2024-05-28T15:24:16.223922460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8twbg,Uid:d76c0160-1409-42cd-bed6-318fb3544376,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a0593097f7a2b5b7a6bf51317b025c62e9a0a4b71db9b78054285c79fc82420\"" May 28 15:24:16 devenv containerd[696]: time="2024-05-28T15:24:16.227941368Z" level=info msg="CreateContainer within sandbox \"3a0593097f7a2b5b7a6bf51317b025c62e9a0a4b71db9b78054285c79fc82420\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 28 15:24:16 devenv containerd[696]: time="2024-05-28T15:24:16.236798967Z" level=info msg="CreateContainer within sandbox \"3a0593097f7a2b5b7a6bf51317b025c62e9a0a4b71db9b78054285c79fc82420\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31ad4f3955ce9ad0d15142365d7429b6a23f091dfa419502f651184dbdcee14e\"" May 28 15:24:16 devenv containerd[696]: time="2024-05-28T15:24:16.237471686Z" level=info msg="StartContainer for \"31ad4f3955ce9ad0d15142365d7429b6a23f091dfa419502f651184dbdcee14e\"" May 28 15:24:16 devenv containerd[696]: time="2024-05-28T15:24:16.297562577Z" level=info msg="StartContainer for \"31ad4f3955ce9ad0d15142365d7429b6a23f091dfa419502f651184dbdcee14e\" returns successfully" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.502610321Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.504144633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.505524263Z" level=info msg="PullImage \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366\" returns image reference \"sha256:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66\"" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.508459799Z" level=info msg="CreateContainer within sandbox \"fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f\" for container &ContainerMetadata{Name:create,Attempt:0,}" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.517443222Z" level=info msg="CreateContainer within sandbox \"fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f\" for &ContainerMetadata{Name:create,Attempt:0,} returns container id \"a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215\"" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.517880631Z" level=info msg="StartContainer for \"a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215\"" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.583881014Z" level=info msg="StartContainer for \"a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215\" returns successfully" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.620372292Z" level=info msg="shim disconnected" id=a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215 May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.620426002Z" level=warning msg="cleaning up after shim disconnected" id=a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215 namespace=k8s.io May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.620436405Z" level=info msg="cleaning up dead shim" May 28 15:24:17 devenv containerd[696]: time="2024-05-28T15:24:17.629364837Z" level=warning msg="cleanup warnings time=\"2024-05-28T15:24:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2315 runtime=io.containerd.runc.v2\n" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.108721579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ingress-nginx-admission-patch-hqbms,Uid:4a2ae3e5-5039-4f5d-aabc-42e91338aaf9,Namespace:ingress-nginx,Attempt:0,}" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.145545364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.145665538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.145677021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.146087786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698 pid=2361 runtime=io.containerd.runc.v2 May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.196434940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ingress-nginx-admission-patch-hqbms,Uid:4a2ae3e5-5039-4f5d-aabc-42e91338aaf9,Namespace:ingress-nginx,Attempt:0,} returns sandbox id \"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\"" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.198693908Z" level=info msg="CreateContainer within sandbox \"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\" for container &ContainerMetadata{Name:patch,Attempt:0,}" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.208266261Z" level=info msg="CreateContainer within sandbox \"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\" for &ContainerMetadata{Name:patch,Attempt:0,} returns container id \"5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0\"" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.208802615Z" level=info msg="StartContainer for \"5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0\"" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.253328512Z" level=info msg="StartContainer for \"5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0\" returns successfully" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.309321614Z" level=info msg="shim disconnected" id=5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0 May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.309394015Z" level=warning msg="cleaning up after shim disconnected" id=5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0 namespace=k8s.io May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.309405573Z" level=info msg="cleaning up dead shim" May 28 15:24:18 devenv containerd[696]: time="2024-05-28T15:24:18.319101426Z" level=warning msg="cleanup warnings time=\"2024-05-28T15:24:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\n" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.210274186Z" level=info msg="StopPodSandbox for \"fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f\"" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.210318617Z" level=info msg="Container to stop \"a440059d316ac8eacd4f52edfceb2565b905f99b0b799267b5b1e7e088d55215\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.238195971Z" level=info msg="shim disconnected" id=fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.238293288Z" level=warning msg="cleaning up after shim disconnected" id=fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f namespace=k8s.io May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.238305905Z" level=info msg="cleaning up dead shim" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.247833733Z" level=warning msg="cleanup warnings time=\"2024-05-28T15:24:19Z\" level=info msg=\"starting signal 
loop\" namespace=k8s.io pid=2474 runtime=io.containerd.runc.v2\n" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.263854624Z" level=info msg="TearDown network for sandbox \"fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f\" successfully" May 28 15:24:19 devenv containerd[696]: time="2024-05-28T15:24:19.263902623Z" level=info msg="StopPodSandbox for \"fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f\" returns successfully" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.018014393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ingress-nginx-controller-768f948f8f-gd46m,Uid:e89604da-2114-4643-872d-2fd4d350bacc,Namespace:ingress-nginx,Attempt:0,}" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.110511715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.110613098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.110625403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.111009692Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b0a999ad645e17a62aec3f3a91660b7106b2233428be3ca70b2ec34ece69fe3 pid=2577 runtime=io.containerd.runc.v2 May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.161014674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ingress-nginx-controller-768f948f8f-gd46m,Uid:e89604da-2114-4643-872d-2fd4d350bacc,Namespace:ingress-nginx,Attempt:0,} returns sandbox id \"5b0a999ad645e17a62aec3f3a91660b7106b2233428be3ca70b2ec34ece69fe3\"" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.162460474Z" level=info msg="PullImage \"registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e\"" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.216202871Z" level=info msg="StopPodSandbox for \"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\"" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.216555479Z" level=info msg="Container to stop \"5ab8780430067d69d9420132591f864d0bc512a1bfebd2ac5d5063c8b860c6e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.249638020Z" level=info msg="shim disconnected" id=45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698 May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.249701610Z" level=warning msg="cleaning up after shim disconnected" id=45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698 namespace=k8s.io May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.249713419Z" level=info msg="cleaning up dead shim" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.259169957Z" level=warning msg="cleanup warnings time=\"2024-05-28T15:24:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.272591628Z" level=info msg="TearDown network for sandbox 
\"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\" successfully" May 28 15:24:20 devenv containerd[696]: time="2024-05-28T15:24:20.272636951Z" level=info msg="StopPodSandbox for \"45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698\" returns successfully" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.823491124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.825194119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ee54966f3891d75b255d160236368a4f9d3b588d32fb44bd04aea5101143e829,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.827063457Z" level=info msg="PullImage \"registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e\" returns image reference \"sha256:ee54966f3891d75b255d160236368a4f9d3b588d32fb44bd04aea5101143e829\"" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.829386826Z" level=info msg="CreateContainer within sandbox \"5b0a999ad645e17a62aec3f3a91660b7106b2233428be3ca70b2ec34ece69fe3\" for container &ContainerMetadata{Name:controller,Attempt:0,}" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.838257882Z" level=info msg="CreateContainer within sandbox \"5b0a999ad645e17a62aec3f3a91660b7106b2233428be3ca70b2ec34ece69fe3\" for &ContainerMetadata{Name:controller,Attempt:0,} returns container id \"21f2d7a85340be40e492d11249a0cd9f431fcca4aff8858ab4989186c26ba354\"" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.838750889Z" level=info msg="StartContainer for \"21f2d7a85340be40e492d11249a0cd9f431fcca4aff8858ab4989186c26ba354\"" May 28 15:24:29 devenv containerd[696]: time="2024-05-28T15:24:29.874834749Z" level=info msg="StartContainer for \"21f2d7a85340be40e492d11249a0cd9f431fcca4aff8858ab4989186c26ba354\" returns successfully" ==> coredns [31ad4f3955ce9ad0d15142365d7429b6a23f091dfa419502f651184dbdcee14e] <== .:53 [INFO] plugin/reload: Running configuration SHA512 = 0acd057f3a0f4709031c7dfc71869eb076b357e33cc3f9e8c7bbf24d03af38ef7635b34367a89d45adab17a5391a1c2d058603c581e1c5f4a21732bf72371934 CoreDNS-1.11.1 linux/amd64, go1.20.7, ae2bbc2 [INFO] 127.0.0.1:53954 - 32248 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 6.00273133s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:41831->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:56372 - 41825 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000417913s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:47268->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:35187 - 38147 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 6.002649717s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:51632->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:37224 - 18434 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000507256s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. 
HINFO: read udp 10.244.0.3:36998->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:56872 - 54170 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.001016536s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:45730->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:36850 - 25564 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000991315s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:58323->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:43985 - 5709 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000640119s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:50856->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:40503 - 59742 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000720649s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:36579->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:54699 - 8129 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.000856083s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:38681->192.168.49.1:53: i/o timeout [INFO] 127.0.0.1:41219 - 3088 "HINFO IN 5871168934709757088.6320347471249865962. udp 57 false 512" - - 0 2.001376433s [ERROR] plugin/errors: 2 5871168934709757088.6320347471249865962. HINFO: read udp 10.244.0.3:60938->192.168.49.1:53: i/o timeout ==> describe nodes <== Name: devenv Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=devenv kubernetes.io/os=linux minikube.k8s.io/commit=v1.33.1 minikube.k8s.io/name=devenv minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_05_28T11_23_50_0700 minikube.k8s.io/version=v1.33.1 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 28 May 2024 15:23:47 +0000 Taints: Unschedulable: false Lease: HolderIdentity: devenv AcquireTime: RenewTime: Tue, 28 May 2024 15:25:01 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Tue, 28 May 2024 15:24:51 +0000 Tue, 28 May 2024 15:23:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 28 May 2024 15:24:51 +0000 Tue, 28 May 2024 15:23:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 28 May 2024 15:24:51 +0000 Tue, 28 May 2024 15:23:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 28 May 2024 15:24:51 +0000 Tue, 28 May 2024 15:23:47 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: devenv Capacity: cpu: 4 ephemeral-storage: 104266732Ki hugepages-2Mi: 0 memory: 4000880Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 104266732Ki hugepages-2Mi: 0 memory: 4000880Ki pods: 110 System Info: Machine ID: af936c24485d4edfa7a1170fd3d6aae3 System UUID: c68467cd-bead-426e-8787-e03e251a0034 Boot ID: bf664500-989b-43df-aecf-1f90d1e186d4 Kernel Version: 6.8.8-300.fc40.x86_64 OS Image: Ubuntu 22.04.4 LTS Operating System: linux 
Architecture: amd64 Container Runtime Version: containerd://1.6.31 Kubelet Version: v1.30.0 Kube-Proxy Version: v1.30.0 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- ingress-nginx ingress-nginx-controller-768f948f8f-gd46m 100m (2%!)(MISSING) 0 (0%!)(MISSING) 90Mi (2%!)(MISSING) 0 (0%!)(MISSING) 65s kube-system coredns-7db6d8ff4d-8twbg 100m (2%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 65s kube-system etcd-devenv 100m (2%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 78s kube-system kindnet-h7kkw 100m (2%!)(MISSING) 100m (2%!)(MISSING) 50Mi (1%!)(MISSING) 50Mi (1%!)(MISSING) 65s kube-system kube-apiserver-devenv 250m (6%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 79s kube-system kube-controller-manager-devenv 200m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 80s kube-system kube-proxy-ctc8b 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 65s kube-system kube-scheduler-devenv 100m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 78s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 76s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 950m (23%!)(MISSING) 100m (2%!)(MISSING) memory 310Mi (7%!)(MISSING) 220Mi (5%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 64s kube-proxy Normal Starting 84s kubelet Starting kubelet. Normal NodeHasSufficientMemory 84s (x8 over 84s) kubelet Node devenv status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 84s (x8 over 84s) kubelet Node devenv status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 84s (x7 over 84s) kubelet Node devenv status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 84s kubelet Updated Node Allocatable limit across pods Normal Starting 78s kubelet Starting kubelet. Normal NodeAllocatableEnforced 78s kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 78s kubelet Node devenv status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 78s kubelet Node devenv status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 78s kubelet Node devenv status is now: NodeHasSufficientPID Normal RegisteredNode 66s node-controller Node devenv event: Registered Node devenv in Controller ==> dmesg <== dmesg: read kernel buffer failed: Operation not permitted ==> etcd [e793e670075321eb131b63e091b4d512400c3f68a5a0eeac20625118db558155] <== {"level":"warn","ts":"2024-05-28T15:23:45.180515Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. 
This is not recommended for production."}
{"level":"info","ts":"2024-05-28T15:23:45.180638Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=devenv=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=devenv","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2024-05-28T15:23:45.180724Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-05-28T15:23:45.180734Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-05-28T15:23:45.180761Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-05-28T15:23:45.181494Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2024-05-28T15:23:45.181594Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"devenv","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"devenv=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2024-05-28T15:23:45.184443Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.560473ms"}
{"level":"info","ts":"2024-05-28T15:23:45.188691Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2024-05-28T15:23:45.188807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2024-05-28T15:23:45.188833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2024-05-28T15:23:45.18884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2024-05-28T15:23:45.188845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2024-05-28T15:23:45.188869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2024-05-28T15:23:45.19216Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2024-05-28T15:23:45.19347Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2024-05-28T15:23:45.194179Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2024-05-28T15:23:45.195699Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-05-28T15:23:45.195862Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2024-05-28T15:23:45.242656Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-05-28T15:23:45.242794Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-05-28T15:23:45.242817Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-05-28T15:23:45.243707Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-05-28T15:23:45.244484Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-05-28T15:23:45.244618Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-05-28T15:23:45.244967Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-05-28T15:23:45.245096Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-05-28T15:23:45.246173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-05-28T15:23:45.246424Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-05-28T15:23:46.189171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-05-28T15:23:46.189249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-05-28T15:23:46.189264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-05-28T15:23:46.189364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-05-28T15:23:46.189395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-05-28T15:23:46.18954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-05-28T15:23:46.189611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-05-28T15:23:46.190813Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-28T15:23:46.191746Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:devenv ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-05-28T15:23:46.191835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-28T15:23:46.192099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-28T15:23:46.192181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-28T15:23:46.192181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-28T15:23:46.192284Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-28T15:23:46.193114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-05-28T15:23:46.193284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-05-28T15:23:46.194609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-05-28T15:23:46.195398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-05-28T15:23:51.630676Z","caller":"traceutil/trace.go:171","msg":"trace[2056886709] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"201.655055ms","start":"2024-05-28T15:23:51.429002Z","end":"2024-05-28T15:23:51.630657Z","steps":["trace[2056886709] 'process raft request' (duration: 201.503748ms)"],"step_count":1}
==> kernel <==
15:25:08 up 2 min, 0 users, load average: 0.80, 0.44, 0.17
Linux devenv 6.8.8-300.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Apr 27 17:53:31 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"

==> kindnet [fe3cc992b9a1105fdab5c77782b20ac7c8ba0d20ecba2f89eff78eaf8158edd0] <==
I0528 15:24:04.244223 1 main.go:102] connected to apiserver: https://10.96.0.1:443
I0528 15:24:04.244294 1 main.go:107] hostIP = 192.168.49.2 podIP = 192.168.49.2
I0528 15:24:04.244401 1 main.go:116] setting mtu 1500 for CNI
I0528 15:24:04.244419 1 main.go:146] kindnetd IP family: "ipv4"
I0528 15:24:04.244435 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0528 15:24:04.631501 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:04.631562 1 main.go:227] handling current node
I0528 15:24:14.742579 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:14.742625 1 main.go:227] handling current node
I0528 15:24:24.749215 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:24.749251 1 main.go:227] handling current node
I0528 15:24:34.753763 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:34.753843 1 main.go:227] handling current node
I0528 15:24:44.760392 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:44.760449 1 main.go:227] handling current node
I0528 15:24:54.765461 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:24:54.765515 1 main.go:227] handling current node
I0528 15:25:04.771209 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0528 15:25:04.771260 1 main.go:227] handling current node
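Note: kindnet is working normally here: it connects to the apiserver's service IP, sets the CNI MTU, and then reconciles the node list roughly every ten seconds (the repeating "Handling node with IPs" / "handling current node" pairs). A rough client-go analogue of that polling loop, for illustration only — this is not kindnet's actual source, and the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"flag"
	"log"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Re-list nodes on the ~10s cadence visible in the kindnet log.
	for {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Printf("list nodes: %v", err)
		} else {
			for _, n := range nodes.Items {
				log.Printf("handling node %s, podCIDRs=%v", n.Name, n.Spec.PodCIDRs)
			}
		}
		time.Sleep(10 * time.Second)
	}
}
```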
==> kube-apiserver [073a94b480fbe55baa34efb8c40243a92b8e7e015de58c50f71804b2a93d5774] <==
I0528 15:23:47.291858 1 system_namespaces_controller.go:67] Starting system namespaces controller
I0528 15:23:47.291875 1 apf_controller.go:374] Starting API Priority and Fairness config controller
I0528 15:23:47.292158 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0528 15:23:47.292166 1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
I0528 15:23:47.292205 1 available_controller.go:423] Starting AvailableConditionController
I0528 15:23:47.292212 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0528 15:23:47.292248 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0528 15:23:47.292257 1 controller.go:116] Starting legacy_token_tracking_controller
I0528 15:23:47.292264 1 shared_informer.go:313] Waiting for caches to sync for configmaps
I0528 15:23:47.292292 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0528 15:23:47.292303 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0528 15:23:47.292310 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0528 15:23:47.292355 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0528 15:23:47.292367 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
I0528 15:23:47.292398 1 controller.go:139] Starting OpenAPI controller
I0528 15:23:47.292416 1 controller.go:87] Starting OpenAPI V3 controller
I0528 15:23:47.292430 1 naming_controller.go:291] Starting NamingConditionController
I0528 15:23:47.292439 1 establishing_controller.go:76] Starting EstablishingController
I0528 15:23:47.292452 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0528 15:23:47.292461 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0528 15:23:47.292469 1 crd_finalizer.go:266] Starting CRDFinalizer
I0528 15:23:47.292692 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0528 15:23:47.292765 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0528 15:23:47.392094 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0528 15:23:47.392134 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0528 15:23:47.392236 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0528 15:23:47.392349 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0528 15:23:47.392364 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0528 15:23:47.392396 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0528 15:23:47.392484 1 shared_informer.go:320] Caches are synced for configmaps
I0528 15:23:47.392608 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0528 15:23:47.392629 1 aggregator.go:165] initial CRD sync complete...
I0528 15:23:47.392634 1 autoregister_controller.go:141] Starting autoregister controller
I0528 15:23:47.392638 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0528 15:23:47.392642 1 cache.go:39] Caches are synced for autoregister controller
E0528 15:23:47.424953 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0528 15:23:47.447469 1 shared_informer.go:320] Caches are synced for node_authorizer
E0528 15:23:47.447548 1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
I0528 15:23:47.448732 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0528 15:23:47.448769 1 policy_source.go:224] refreshing policies
I0528 15:23:47.493261 1 controller.go:615] quota admission added evaluator for: namespaces
I0528 15:23:47.627818 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0528 15:23:48.298306 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0528 15:23:48.302883 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0528 15:23:48.302899 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0528 15:23:48.737904 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0528 15:23:48.771573 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0528 15:23:48.906774 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0528 15:23:48.914921 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0528 15:23:48.915930 1 controller.go:615] quota admission added evaluator for: endpoints
I0528 15:23:48.920471 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0528 15:23:49.360408 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0528 15:23:50.070185 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0528 15:23:50.078760 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0528 15:23:50.085357 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0528 15:23:52.707467 1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.98.56.220"}
I0528 15:23:52.715334 1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.223.196"}
I0528 15:23:52.728004 1 controller.go:615] quota admission added evaluator for: jobs.batch
I0528 15:24:02.937769 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0528 15:24:03.437804 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
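Note: the two `namespaces "kube-system" not found` errors are a normal bootstrap race, not a fault. The lease and token-tracking controllers start before kubeadm has created the kube-system namespace, and they stop retrying once namespaces exist (see "quota admission added evaluator for: namespaces" at 15:23:47.493). The "allocated clusterIPs" lines show each Service receiving its virtual IP: 10.96.0.1 for kubernetes, 10.96.0.10 for kube-dns, plus the two ingress-nginx services. A small client-go sketch to cross-check those allocations (the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Print every Service's cluster IP; these should match the
	// "allocated clusterIPs" entries in the kube-apiserver log.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s -> %s\n", s.Namespace, s.Name, s.Spec.ClusterIP)
	}
}
```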
==> kube-controller-manager [7000737def60e008808c12044d61a6b1165482fb3477ce23d94a375c214004d2] <==
I0528 15:24:02.569358 1 shared_informer.go:320] Caches are synced for GC
I0528 15:24:02.584097 1 shared_informer.go:320] Caches are synced for disruption
I0528 15:24:02.586624 1 shared_informer.go:320] Caches are synced for PVC protection
I0528 15:24:02.586667 1 shared_informer.go:320] Caches are synced for TTL
I0528 15:24:02.586667 1 shared_informer.go:320] Caches are synced for ephemeral
I0528 15:24:02.588917 1 shared_informer.go:320] Caches are synced for node
I0528 15:24:02.588951 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0528 15:24:02.588966 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0528 15:24:02.588970 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0528 15:24:02.588973 1 shared_informer.go:320] Caches are synced for cidrallocator
I0528 15:24:02.591975 1 shared_informer.go:320] Caches are synced for namespace
I0528 15:24:02.593594 1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="devenv" podCIDRs=["10.244.0.0/24"]
I0528 15:24:02.595788 1 shared_informer.go:320] Caches are synced for attach detach
I0528 15:24:02.598271 1 shared_informer.go:320] Caches are synced for persistent volume
I0528 15:24:02.600756 1 shared_informer.go:320] Caches are synced for endpoint
I0528 15:24:02.606273 1 shared_informer.go:320] Caches are synced for service account
I0528 15:24:02.633600 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0528 15:24:02.663773 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0528 15:24:02.685103 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0528 15:24:02.686262 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0528 15:24:02.693249 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:02.693364 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:02.697416 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:02.700356 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:02.701257 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:02.702362 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:02.709725 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:02.715335 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:02.761694 1 shared_informer.go:320] Caches are synced for HPA
I0528 15:24:02.789484 1 shared_informer.go:320] Caches are synced for resource quota
I0528 15:24:02.821176 1 shared_informer.go:320] Caches are synced for resource quota
I0528 15:24:03.215502 1 shared_informer.go:320] Caches are synced for garbage collector
I0528 15:24:03.235285 1 shared_informer.go:320] Caches are synced for garbage collector
I0528 15:24:03.235324 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0528 15:24:03.803696 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="861.457034ms"
I0528 15:24:03.803955 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="862.463575ms"
I0528 15:24:03.823638 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.660622ms"
I0528 15:24:03.824181 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="250.092µs"
I0528 15:24:03.832118 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.037259ms"
I0528 15:24:03.837657 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="33.917211ms"
I0528 15:24:03.837724 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="37.922µs"
I0528 15:24:03.841584 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="55.616µs"
I0528 15:24:17.211711 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.281µs"
I0528 15:24:17.252934 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.446559ms"
I0528 15:24:17.253232 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.215µs"
I0528 15:24:18.215643 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:19.218943 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:19.273249 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:20.226881 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:20.277978 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:20.281865 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:20.282880 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:20.287837 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0528 15:24:21.225968 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:21.288325 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:21.293243 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:21.297020 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0528 15:24:30.248927 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="44.132µs"
I0528 15:24:40.038300 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="6.895423ms"
I0528 15:24:40.038400 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="33.48µs"

==> kube-proxy [a6e769bd785a043eb98118bb0654f917b622bfdc9295fa962b92095a0fe1cbb5] <==
I0528 15:24:03.973987 1 server_linux.go:69] "Using iptables proxy"
I0528 15:24:03.979877 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
I0528 15:24:04.002697 1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0528 15:24:04.002753 1 server_linux.go:165] "Using iptables Proxier"
I0528 15:24:04.005194 1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0528 15:24:04.005225 1 server_linux.go:528] "Defaulting to no-op detect-local"
I0528 15:24:04.005245 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0528 15:24:04.005546 1 server.go:872] "Version info" version="v1.30.0"
I0528 15:24:04.005601 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0528 15:24:04.207516 1 config.go:319] "Starting node config controller"
I0528 15:24:04.207560 1 shared_informer.go:313] Waiting for caches to sync for node config
I0528 15:24:04.207535 1 config.go:101] "Starting endpoint slice config controller"
I0528 15:24:04.207643 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0528 15:24:04.207543 1 config.go:192] "Starting service config controller"
I0528 15:24:04.207653 1 shared_informer.go:313] Waiting for caches to sync for service config
I0528 15:24:04.308571 1 shared_informer.go:320] Caches are synced for service config
I0528 15:24:04.308683 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0528 15:24:04.308947 1 shared_informer.go:320] Caches are synced for node config
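Note: by this point the node-ipam-controller has assigned the node its PodCIDR (10.244.0.0/24) and kube-proxy has come up in iptables mode. The IPv6 "no cluster CIDR for family" warning is expected on a single-stack IPv4 cluster; detect-local simply becomes a no-op for that family. kubeadm-provisioned clusters such as this one normally keep the rendered proxy configuration in a ConfigMap in kube-system; a sketch to dump it, where the ConfigMap name "kube-proxy", the "config.conf" key, and the kubeconfig path are all assumptions about that convention:

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed kubeadm convention: rendered proxy config under
	// kube-system/kube-proxy, key "config.conf".
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cm.Data["config.conf"])
}
```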
==> kube-scheduler [2e80204f6c7fce43ad1f75c2925b3cdf313ef8fe353a5cb949f034a63306a8a7] <==
W0528 15:23:47.308965 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0528 15:23:47.346634 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
I0528 15:23:47.347112 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0528 15:23:47.352394 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0528 15:23:47.352911 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0528 15:23:47.352925 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0528 15:23:47.368845 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0528 15:23:47.372182 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:47.372516 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:47.372733 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0528 15:23:47.372794 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0528 15:23:47.373106 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 15:23:47.373153 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0528 15:23:47.373134 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 15:23:47.373470 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0528 15:23:47.372622 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:47.373604 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:47.373409 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 15:23:47.373629 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0528 15:23:47.373977 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:47.374015 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:47.374680 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0528 15:23:47.374725 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0528 15:23:47.374802 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:47.374845 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:47.374901 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0528 15:23:47.374915 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0528 15:23:47.375002 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 15:23:47.375091 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0528 15:23:47.375191 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0528 15:23:47.375211 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0528 15:23:47.375299 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0528 15:23:47.375385 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0528 15:23:47.375685 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 15:23:47.375906 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0528 15:23:47.376031 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0528 15:23:47.376052 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0528 15:23:48.265502 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 15:23:48.265597 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0528 15:23:48.267389 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:48.267433 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:48.324626 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:48.324655 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:48.441835 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 15:23:48.441863 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0528 15:23:48.470806 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0528 15:23:48.470853 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0528 15:23:48.478231 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0528 15:23:48.478341 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0528 15:23:48.492833 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 15:23:48.492887 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0528 15:23:48.495192 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0528 15:23:48.495241 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0528 15:23:48.497180 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 15:23:48.497223 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0528 15:23:48.499046 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0528 15:23:48.499193 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0528 15:23:48.636548 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 15:23:48.636594 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0528 15:23:50.470763 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
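Note: the burst of `is forbidden: User "system:kube-scheduler" cannot list ...` warnings is another startup-ordering artifact, not a persistent RBAC failure. The scheduler's informers begin listing before the apiserver has bootstrapped the default roles and bindings (created at 15:23:48.737 and 15:23:48.771 in the kube-apiserver section above); the reflectors retry, and by 15:23:50.470 the caches sync cleanly. One way to verify such a permission after the fact is a SelfSubjectAccessReview; a client-go sketch follows (kubeconfig path assumed; it tests whatever identity the kubeconfig authenticates as):

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"path/filepath"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the apiserver whether this client may perform the same
	// verb/resource pair the scheduler was briefly denied above.
	ssar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "storageclasses",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), ssar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```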
==> kubelet <==
May 28 15:24:02 devenv kubelet[1422]: I0528 15:24:02.891787 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88n84\" (UniqueName: \"kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84\") pod \"ingress-nginx-admission-patch-hqbms\" (UID: \"4a2ae3e5-5039-4f5d-aabc-42e91338aaf9\") " pod="ingress-nginx/ingress-nginx-admission-patch-hqbms"
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997382 1422 projected.go:294] Couldn't get configMap ingress-nginx/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997448 1422 projected.go:200] Error preparing data for projected volume kube-api-access-s78tq for pod ingress-nginx/ingress-nginx-admission-create-sz79q: configmap "kube-root-ca.crt" not found
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997489 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e0fc98c-3fb5-48e5-b8d7-c581d363dd46-kube-api-access-s78tq podName:0e0fc98c-3fb5-48e5-b8d7-c581d363dd46 nodeName:}" failed. No retries permitted until 2024-05-28 15:24:03.497476069 +0000 UTC m=+13.685835477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s78tq" (UniqueName: "kubernetes.io/projected/0e0fc98c-3fb5-48e5-b8d7-c581d363dd46-kube-api-access-s78tq") pod "ingress-nginx-admission-create-sz79q" (UID: "0e0fc98c-3fb5-48e5-b8d7-c581d363dd46") : configmap "kube-root-ca.crt" not found
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997566 1422 projected.go:294] Couldn't get configMap ingress-nginx/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997583 1422 projected.go:200] Error preparing data for projected volume kube-api-access-88n84 for pod ingress-nginx/ingress-nginx-admission-patch-hqbms: configmap "kube-root-ca.crt" not found
May 28 15:24:02 devenv kubelet[1422]: E0528 15:24:02.997621 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84 podName:4a2ae3e5-5039-4f5d-aabc-42e91338aaf9 nodeName:}" failed. No retries permitted until 2024-05-28 15:24:03.497608457 +0000 UTC m=+13.685967866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-88n84" (UniqueName: "kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84") pod "ingress-nginx-admission-patch-hqbms" (UID: "4a2ae3e5-5039-4f5d-aabc-42e91338aaf9") : configmap "kube-root-ca.crt" not found
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.452394 1422 topology_manager.go:215] "Topology Admit Handler" podUID="c23b2547-8bfe-4cda-bd3c-554375f05280" podNamespace="kube-system" podName="kube-proxy-ctc8b"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.461932 1422 topology_manager.go:215] "Topology Admit Handler" podUID="102a9a45-c9ba-4cfe-a9c1-0aede4a606ec" podNamespace="kube-system" podName="kindnet-h7kkw"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596687 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/102a9a45-c9ba-4cfe-a9c1-0aede4a606ec-cni-cfg\") pod \"kindnet-h7kkw\" (UID: \"102a9a45-c9ba-4cfe-a9c1-0aede4a606ec\") " pod="kube-system/kindnet-h7kkw"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596740 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102a9a45-c9ba-4cfe-a9c1-0aede4a606ec-lib-modules\") pod \"kindnet-h7kkw\" (UID: \"102a9a45-c9ba-4cfe-a9c1-0aede4a606ec\") " pod="kube-system/kindnet-h7kkw"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596757 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c23b2547-8bfe-4cda-bd3c-554375f05280-kube-proxy\") pod \"kube-proxy-ctc8b\" (UID: \"c23b2547-8bfe-4cda-bd3c-554375f05280\") " pod="kube-system/kube-proxy-ctc8b"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596769 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102a9a45-c9ba-4cfe-a9c1-0aede4a606ec-xtables-lock\") pod \"kindnet-h7kkw\" (UID: \"102a9a45-c9ba-4cfe-a9c1-0aede4a606ec\") " pod="kube-system/kindnet-h7kkw"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596780 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzlv\" (UniqueName: \"kubernetes.io/projected/102a9a45-c9ba-4cfe-a9c1-0aede4a606ec-kube-api-access-mzzlv\") pod \"kindnet-h7kkw\" (UID: \"102a9a45-c9ba-4cfe-a9c1-0aede4a606ec\") " pod="kube-system/kindnet-h7kkw"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596793 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6w58\" (UniqueName: \"kubernetes.io/projected/c23b2547-8bfe-4cda-bd3c-554375f05280-kube-api-access-v6w58\") pod \"kube-proxy-ctc8b\" (UID: \"c23b2547-8bfe-4cda-bd3c-554375f05280\") " pod="kube-system/kube-proxy-ctc8b"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596805 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c23b2547-8bfe-4cda-bd3c-554375f05280-xtables-lock\") pod \"kube-proxy-ctc8b\" (UID: \"c23b2547-8bfe-4cda-bd3c-554375f05280\") " pod="kube-system/kube-proxy-ctc8b"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.596832 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c23b2547-8bfe-4cda-bd3c-554375f05280-lib-modules\") pod \"kube-proxy-ctc8b\" (UID: \"c23b2547-8bfe-4cda-bd3c-554375f05280\") " pod="kube-system/kube-proxy-ctc8b"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.627555 1422 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\": failed to find network info for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\""
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.627614 1422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\": failed to find network info for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\"" pod="ingress-nginx/ingress-nginx-admission-create-sz79q"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.627632 1422 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\": failed to find network info for sandbox \"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\"" pod="ingress-nginx/ingress-nginx-admission-create-sz79q"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.627697 1422 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-nginx-admission-create-sz79q_ingress-nginx(0e0fc98c-3fb5-48e5-b8d7-c581d363dd46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-nginx-admission-create-sz79q_ingress-nginx(0e0fc98c-3fb5-48e5-b8d7-c581d363dd46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\\\": failed to find network info for sandbox \\\"e0957fd595727da358e11ecf097ee7a427c0fb8ad8e63224c368d6bc82123045\\\"\"" pod="ingress-nginx/ingress-nginx-admission-create-sz79q" podUID="0e0fc98c-3fb5-48e5-b8d7-c581d363dd46"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.803644 1422 topology_manager.go:215] "Topology Admit Handler" podUID="d76c0160-1409-42cd-bed6-318fb3544376" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8twbg"
May 28 15:24:03 devenv kubelet[1422]: I0528 15:24:03.813923 1422 topology_manager.go:215] "Topology Admit Handler" podUID="e89604da-2114-4643-872d-2fd4d350bacc" podNamespace="ingress-nginx" podName="ingress-nginx-controller-768f948f8f-gd46m"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.920237 1422 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\": failed to find network info for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\""
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.920307 1422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\": failed to find network info for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\"" pod="ingress-nginx/ingress-nginx-admission-patch-hqbms"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.920325 1422 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\": failed to find network info for sandbox \"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\"" pod="ingress-nginx/ingress-nginx-admission-patch-hqbms"
May 28 15:24:03 devenv kubelet[1422]: E0528 15:24:03.920375 1422 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-nginx-admission-patch-hqbms_ingress-nginx(4a2ae3e5-5039-4f5d-aabc-42e91338aaf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-nginx-admission-patch-hqbms_ingress-nginx(4a2ae3e5-5039-4f5d-aabc-42e91338aaf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\\\": failed to find network info for sandbox \\\"7aaaf38b69a019fd1bbb054668c2704d42491f1e7ca73275d4ca5a3c218f6564\\\"\"" pod="ingress-nginx/ingress-nginx-admission-patch-hqbms" podUID="4a2ae3e5-5039-4f5d-aabc-42e91338aaf9"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.000112 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6mt\" (UniqueName: \"kubernetes.io/projected/d76c0160-1409-42cd-bed6-318fb3544376-kube-api-access-4g6mt\") pod \"coredns-7db6d8ff4d-8twbg\" (UID: \"d76c0160-1409-42cd-bed6-318fb3544376\") " pod="kube-system/coredns-7db6d8ff4d-8twbg"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.000161 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert\") pod \"ingress-nginx-controller-768f948f8f-gd46m\" (UID: \"e89604da-2114-4643-872d-2fd4d350bacc\") " pod="ingress-nginx/ingress-nginx-controller-768f948f8f-gd46m"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.000177 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d76c0160-1409-42cd-bed6-318fb3544376-config-volume\") pod \"coredns-7db6d8ff4d-8twbg\" (UID: \"d76c0160-1409-42cd-bed6-318fb3544376\") " pod="kube-system/coredns-7db6d8ff4d-8twbg"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.000188 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpsz\" (UniqueName: \"kubernetes.io/projected/e89604da-2114-4643-872d-2fd4d350bacc-kube-api-access-nhpsz\") pod \"ingress-nginx-controller-768f948f8f-gd46m\" (UID: \"e89604da-2114-4643-872d-2fd4d350bacc\") " pod="ingress-nginx/ingress-nginx-controller-768f948f8f-gd46m"
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.101293 1422 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.101339 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert podName:e89604da-2114-4643-872d-2fd4d350bacc nodeName:}" failed. No retries permitted until 2024-05-28 15:24:04.601329195 +0000 UTC m=+14.789688600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert") pod "ingress-nginx-controller-768f948f8f-gd46m" (UID: "e89604da-2114-4643-872d-2fd4d350bacc") : secret "ingress-nginx-admission" not found
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.178354 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h7kkw" podStartSLOduration=1.178340356 podStartE2EDuration="1.178340356s" podCreationTimestamp="2024-05-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 15:24:04.178303822 +0000 UTC m=+14.366663234" watchObservedRunningTime="2024-05-28 15:24:04.178340356 +0000 UTC m=+14.366699767"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.185750 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.185735773 podStartE2EDuration="12.185735773s" podCreationTimestamp="2024-05-28 15:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 15:24:04.185572954 +0000 UTC m=+14.373932373" watchObservedRunningTime="2024-05-28 15:24:04.185735773 +0000 UTC m=+14.374095192"
May 28 15:24:04 devenv kubelet[1422]: I0528 15:24:04.192416 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ctc8b" podStartSLOduration=1.192403634 podStartE2EDuration="1.192403634s" podCreationTimestamp="2024-05-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 15:24:04.191990941 +0000 UTC m=+14.380350355" watchObservedRunningTime="2024-05-28 15:24:04.192403634 +0000 UTC m=+14.380763044"
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.428576 1422 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\": failed to find network info for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\""
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.428619 1422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\": failed to find network info for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\"" pod="kube-system/coredns-7db6d8ff4d-8twbg"
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.428635 1422 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\": failed to find network info for sandbox \"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\"" pod="kube-system/coredns-7db6d8ff4d-8twbg"
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.428665 1422 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8twbg_kube-system(d76c0160-1409-42cd-bed6-318fb3544376)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8twbg_kube-system(d76c0160-1409-42cd-bed6-318fb3544376)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\\\": failed to find network info for sandbox \\\"7d794519ad0c993e57592a9a32a27831374398789efd7d51b3aa0127fac9495d\\\"\"" pod="kube-system/coredns-7db6d8ff4d-8twbg" podUID="d76c0160-1409-42cd-bed6-318fb3544376"
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.604998 1422 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 28 15:24:04 devenv kubelet[1422]: E0528 15:24:04.605221 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert podName:e89604da-2114-4643-872d-2fd4d350bacc nodeName:}" failed. No retries permitted until 2024-05-28 15:24:05.605192539 +0000 UTC m=+15.793551972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert") pod "ingress-nginx-controller-768f948f8f-gd46m" (UID: "e89604da-2114-4643-872d-2fd4d350bacc") : secret "ingress-nginx-admission" not found
May 28 15:24:05 devenv kubelet[1422]: E0528 15:24:05.612953 1422 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 28 15:24:05 devenv kubelet[1422]: E0528 15:24:05.613016 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert podName:e89604da-2114-4643-872d-2fd4d350bacc nodeName:}" failed. No retries permitted until 2024-05-28 15:24:07.61300212 +0000 UTC m=+17.801361530 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert") pod "ingress-nginx-controller-768f948f8f-gd46m" (UID: "e89604da-2114-4643-872d-2fd4d350bacc") : secret "ingress-nginx-admission" not found
May 28 15:24:07 devenv kubelet[1422]: E0528 15:24:07.627848 1422 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 28 15:24:07 devenv kubelet[1422]: E0528 15:24:07.628036 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert podName:e89604da-2114-4643-872d-2fd4d350bacc nodeName:}" failed. No retries permitted until 2024-05-28 15:24:11.628011491 +0000 UTC m=+21.816370917 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert") pod "ingress-nginx-controller-768f948f8f-gd46m" (UID: "e89604da-2114-4643-872d-2fd4d350bacc") : secret "ingress-nginx-admission" not found
May 28 15:24:10 devenv kubelet[1422]: I0528 15:24:10.514709 1422 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
May 28 15:24:10 devenv kubelet[1422]: I0528 15:24:10.516015 1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
May 28 15:24:11 devenv kubelet[1422]: E0528 15:24:11.658812 1422 secret.go:194] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 28 15:24:11 devenv kubelet[1422]: E0528 15:24:11.659001 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert podName:e89604da-2114-4643-872d-2fd4d350bacc nodeName:}" failed. No retries permitted until 2024-05-28 15:24:19.658982676 +0000 UTC m=+29.847342090 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e89604da-2114-4643-872d-2fd4d350bacc-webhook-cert") pod "ingress-nginx-controller-768f948f8f-gd46m" (UID: "e89604da-2114-4643-872d-2fd4d350bacc") : secret "ingress-nginx-admission" not found
May 28 15:24:17 devenv kubelet[1422]: I0528 15:24:17.241244 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8twbg" podStartSLOduration=14.241228364 podStartE2EDuration="14.241228364s" podCreationTimestamp="2024-05-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 15:24:17.212372255 +0000 UTC m=+27.400731665" watchObservedRunningTime="2024-05-28 15:24:17.241228364 +0000 UTC m=+27.429587776"
May 28 15:24:19 devenv kubelet[1422]: I0528 15:24:19.422826 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s78tq\" (UniqueName: \"kubernetes.io/projected/0e0fc98c-3fb5-48e5-b8d7-c581d363dd46-kube-api-access-s78tq\") pod \"0e0fc98c-3fb5-48e5-b8d7-c581d363dd46\" (UID: \"0e0fc98c-3fb5-48e5-b8d7-c581d363dd46\") "
May 28 15:24:19 devenv kubelet[1422]: I0528 15:24:19.426809 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0fc98c-3fb5-48e5-b8d7-c581d363dd46-kube-api-access-s78tq" (OuterVolumeSpecName: "kube-api-access-s78tq") pod "0e0fc98c-3fb5-48e5-b8d7-c581d363dd46" (UID: "0e0fc98c-3fb5-48e5-b8d7-c581d363dd46"). InnerVolumeSpecName "kube-api-access-s78tq".
PluginName "kubernetes.io/projected", VolumeGidValue "" May 28 15:24:19 devenv kubelet[1422]: I0528 15:24:19.523693 1422 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s78tq\" (UniqueName: \"kubernetes.io/projected/0e0fc98c-3fb5-48e5-b8d7-c581d363dd46-kube-api-access-s78tq\") on node \"devenv\" DevicePath \"\"" May 28 15:24:20 devenv kubelet[1422]: I0528 15:24:20.215018 1422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc44d4a2850fbf320fb6e99769aed4dd6574b412f8b72b4ee8c9d86263a56a3f" May 28 15:24:20 devenv kubelet[1422]: I0528 15:24:20.430284 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88n84\" (UniqueName: \"kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84\") pod \"4a2ae3e5-5039-4f5d-aabc-42e91338aaf9\" (UID: \"4a2ae3e5-5039-4f5d-aabc-42e91338aaf9\") " May 28 15:24:20 devenv kubelet[1422]: I0528 15:24:20.432881 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84" (OuterVolumeSpecName: "kube-api-access-88n84") pod "4a2ae3e5-5039-4f5d-aabc-42e91338aaf9" (UID: "4a2ae3e5-5039-4f5d-aabc-42e91338aaf9"). InnerVolumeSpecName "kube-api-access-88n84". PluginName "kubernetes.io/projected", VolumeGidValue "" May 28 15:24:20 devenv kubelet[1422]: I0528 15:24:20.531497 1422 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-88n84\" (UniqueName: \"kubernetes.io/projected/4a2ae3e5-5039-4f5d-aabc-42e91338aaf9-kube-api-access-88n84\") on node \"devenv\" DevicePath \"\"" May 28 15:24:21 devenv kubelet[1422]: I0528 15:24:21.218699 1422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45359f27ba199bea6c55dd231f8845ad76ec4f898dc3c7005a32b4df31c48698" May 28 15:24:30 devenv kubelet[1422]: I0528 15:24:30.249317 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-gd46m" podStartSLOduration=17.583535883 podStartE2EDuration="27.249303025s" podCreationTimestamp="2024-05-28 15:24:03 +0000 UTC" firstStartedPulling="2024-05-28 15:24:20.162218177 +0000 UTC m=+30.350577582" lastFinishedPulling="2024-05-28 15:24:29.827985316 +0000 UTC m=+40.016344724" observedRunningTime="2024-05-28 15:24:30.248892421 +0000 UTC m=+40.437251841" watchObservedRunningTime="2024-05-28 15:24:30.249303025 +0000 UTC m=+40.437662438" ==> storage-provisioner [b5dff207488bde6712f888127b8d238d6d56a0b2dd938a446ea5c278b6c02190] <== I0528 15:24:03.575396 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0528 15:24:05.614693 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0528 15:24:05.614737 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0528 15:24:05.620836 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0528 15:24:05.620998 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_devenv_47ba866a-02ed-4987-80b4-4d6d6d7f2e4b! 
I0528 15:24:05.621000 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"faa1a253-22ee-4843-a280-f20645880108", APIVersion:"v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' devenv_47ba866a-02ed-4987-80b4-4d6d6d7f2e4b became leader I0528 15:24:05.721610 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_devenv_47ba866a-02ed-4987-80b4-4d6d6d7f2e4b!
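
Note: the repeated "failed to find network info for sandbox" and secret "ingress-nginx-admission" not found errors in the kubelet section above appear to be transient startup ordering, not a persistent fault. The webhook-cert mount can only succeed once the ingress-nginx admission job pods (such as ingress-nginx-admission-patch-hqbms above) have populated the secret, and the sandbox retries stop after the pod CIDR 10.244.0.0/24 is configured at 15:24:10; the later entries show CoreDNS and the ingress controller reaching Running. As a minimal sketch for double-checking recovery, assuming kubectl is pointed at this cluster's kubeconfig, something like:

  # The secret the kubelet was retrying on; it should exist once the
  # admission jobs have completed.
  kubectl -n ingress-nginx get secret ingress-nginx-admission

  # The controller pod from the log above should show Running.
  kubectl -n ingress-nginx get pods

  # The CoreDNS pod whose sandbox creation initially failed
  # (k8s-app=kube-dns is the standard CoreDNS label).
  kubectl -n kube-system get pods -l k8s-app=kube-dns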