Error downloading kic artifacts #14664

Closed
iapicca opened this issue Jul 28, 2022 · 18 comments

Labels: co/podman-driver, co/runtime/crio, kind/bug, lifecycle/rotten, os/macos, priority/awaiting-more-evidence

Comments

@iapicca

iapicca commented Jul 28, 2022

What Happened?

steps to reproduce

  • install podman and minikube
brew install minikube && \
brew install podman
  • initialize and start a podman machine
podman machine init --cpus 2 --memory 2048 --disk-size 20 && \
podman machine start
  • launch minikube
 minikube start --driver=podman --container-runtime=cri-o

note

see issue #8426
closed at the time of writing, but the problem is still present
cc @medyagh

Attach the log file

I got an error running minikube logs --file=log.txt too,
please see the logs below

logs
~ minikube start --driver=podman --container-runtime=cri-o

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase: 347.17 MiB / 347.17 MiB  100.00% 994.77 KiB
E0728 13:14:52.529962    8931 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1956MB) ...
🎁  Preparing Kubernetes v1.24.1 on CRI-O 1.22.5 ...
❌  Unable to load cached images: loading cached images: CRI-O load /var/lib/minikube/images/kube-scheduler_v1.24.1: crio load image: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.1: Process exited with status 125
stdout:

stderr:
Getting image source signatures
Copying blob sha256:5306e7faf8268aaedf84b04cf4c418b33d4969bcea13e27c8717f62c13d31ddb
Copying blob sha256:798afb9dcee7e7c858b6f109d8bb3ea6d10081493703a6b77b46d388c38aa8f7
Copying blob sha256:88768122a4ad689aed8daafaa8f3a3877cd1df861c753d5456382c8635db0540
Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)

    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 42.50 MiB / 42.50 MiB [-------------] 100.00% 868.60 KiB p/s 50s
    > kubeadm: 41.38 MiB / 41.38 MiB [-----------] 100.00% 443.87 KiB p/s 1m36s
    > kubelet: 107.50 MiB / 107.50 MiB [---------] 100.00% 819.78 KiB p/s 2m14s
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:20:40.925851    1643 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:22:45Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:22:45Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:24:31Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:24:31Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:25:32Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:25:31Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:25:32.604331    2228 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:27:42Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:27:41Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:29:50Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:29:50Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:30:59Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:30:59Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:25:32.604331    2228 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:27:42Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:27:41Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:29:50Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:29:50Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:30:59Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:30:59Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

~ minikube logs --file=logs.txt
E0728 13:32:15.129839    9421 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

❗  unable to fetch logs for: describe nodes
➜  ~ neofetch
[email protected]
---------------------------------------
OS: macOS 12.5 21G72 arm64
Host: MacBookAir10,1
Kernel: 21.6.0
Uptime: 5 hours, 54 mins
Packages: 48 (brew)
Shell: zsh 5.8.1
Resolution: 1440x900
DE: Aqua
WM: Quartz Compositor
WM Theme: Blue (Dark)
Terminal: iTerm2
Terminal Font: Monaco 12
CPU: Apple M1
GPU: Apple M1
Memory: 1522MiB / 8192MiB





➜  ~ podman --version
podman version 4.1.1

Operating System

macOS (Default)

Driver

Podman

@afbjorklund
Collaborator

afbjorklund commented Jul 28, 2022

Loading images from cache is just a bonus; it is supposed to be able to pull them from the registry otherwise.

  1. preload
  2. cache
  3. registry

Can you do something simple like podman pull k8s.gcr.io/pause:3.7? It seems to be failing inside crio...
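
For reference, a minimal sketch of running that check both against the host podman and against the podman inside the kicbase node (assuming the "minikube" container is up; minikube ssh passes a trailing command through to the node):

$ podman pull k8s.gcr.io/pause:3.7                        # host-side podman
$ minikube ssh -- sudo podman pull k8s.gcr.io/pause:3.7   # podman inside the node

The second form is the one that exercises the in-cluster pull path that is failing above.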

@afbjorklund afbjorklund added co/podman-driver podman driver issues os/macos co/runtime/crio CRIO related issues labels Jul 28, 2022
@afbjorklund
Collaborator

afbjorklund commented Jul 28, 2022

Actually I have no idea if it works with rootless podman; previously it was recommended to use the regular (rootful) connection:

podman system connection default podman-machine-default-root

https://minikube.sigs.k8s.io/docs/drivers/podman/

Apparently rootless podman only works with containerd

@afbjorklund
Collaborator

afbjorklund commented Jul 28, 2022

There are three experimental things in play at once here:

  1. Podman Desktop (machine)
  2. Rootless podman (driver)
  3. Rootless cri-o (runtime)

@iapicca
Author

iapicca commented Jul 28, 2022

@afbjorklund

Loading images from cache is just a bonus; it is supposed to be able to pull them from the registry otherwise.

  1. preload
  2. cache
  3. registry

Can you do something simple like podman pull k8s.gcr.io/pause:3.7? It seems to be failing inside crio...

~ podman pull k8s.gcr.io/pause:3.7
Trying to pull k8s.gcr.io/pause:3.7...
Getting image source signatures
Copying blob sha256:aff472d3f83edbbc738d035ea53108fcb1e10564aaf0c8d3d6576a02cc2a5679
Copying blob sha256:aff472d3f83edbbc738d035ea53108fcb1e10564aaf0c8d3d6576a02cc2a5679
Copying config sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550
Writing manifest to image destination
Storing signatures
e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550

Actually I have no idea if it works with rootless podman; previously it was recommended to use the regular (rootful) connection:

podman machine init --cpus 2 --memory 2048 --disk-size 20 --rootful

https://minikube.sigs.k8s.io/docs/drivers/podman/

Apparently rootless podman only works with containerd

There are three experimental things in play at once here:

  1. Podman Desktop (machine)
  2. Rootless podman (driver)
  3. Rootless cri-o (runtime)

I'm trying to narrow down the issue.

I deleted and created a new podman machine, rootful this time,
and tried to run the various "flavours" indicated here.
No luck... and a "worse"(?) error.

rootful podman
podman machine init --cpus 2 --memory 2048 --disk-size 20 --rootful
rootful
 minikube start --driver=podman

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0728 16:13:54.027788   20879 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


❌  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
rootful with cri-o
 minikube start --driver=podman --container-runtime=cri-o

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0728 16:11:08.909266   20809 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


❌  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

rootful with containerd
~ minikube start --driver=podman --container-runtime=containerd

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.24.1 preload ...
    > preloaded-images-k8s-v18-v1...: 411.49 MiB / 411.49 MiB  100.00% 2.87 MiB
E0728 16:17:24.815749   20940 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


❌  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
cannot generate logs
~ minikube logs --file=logs.txt

❌  Exiting due to GUEST_STATUS: state: unknown state "minikube": podman container inspect minikube --format=: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    😿  If the above advice does not help, please let us know:                                                         │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                       │
│                                                                                                                       │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    Please also attach the following file to the GitHub issue:                                                         │
│    - /var/folders/9v/4dpzzrw56m1glmbj5zl6xlvm0000gn/T/minikube_logs_8f6474a291f68fa61b92987d0579232b5754d600_0.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

it seems to expect the minikube container to be there already, but it isn't

➜  ~ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

I've uninstalled and reinstalled the whole thing with homebrew, but nothing has changed

@afbjorklund
Collaborator

Actually I meant the results from the sudo podman inside the cluster, but I see how it can be hard to test.
The old cluster needs to be deleted before these big changes take effect (right now it just says "restarting").

This setup/combination is supposed to work: Podman Engine (linux), rootful podman, rootful cri-o
But it is not tested on a regular basis (unlike Docker), so it is possible that it doesn't work right now...

Not sure what the status of the preload/cache was; I think it was left half-implemented or something?
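
As a concrete sketch of that delete-then-retry sequence mentioned above (assuming the default profile name):

$ minikube delete    # remove the stale "minikube" profile and its container
$ minikube start --driver=podman --container-runtime=cri-o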

@giuseppeingoglia

giuseppeingoglia commented Sep 16, 2022

I have the same issue running podman on Arch Linux.

E0916 20:41:26.765749 4830 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426

But it seems to run just fine despite the error:

~ » minikube profile list
|---------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|---------|-----------|---------|--------------|------|---------|---------|-------|
| test | podman | crio | 192.168.58.2 | 8443 | v1.23.3 | Running | 1 |
|---------|-----------|---------|--------------|------|---------|---------|-------|

the same if I start with containerd or docker

|---------|-----------|------------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime    | IP           | Port | Version | Status  | Nodes |
|---------|-----------|------------|--------------|------|---------|---------|-------|
| test    | podman    | crio       | 192.168.58.2 | 8443 | v1.23.3 | Running | 1     |
| test2   | podman    | docker     | 192.168.67.2 | 8443 | v1.23.3 | Running | 1     |
| test3   | podman    | containerd | 192.168.76.2 | 8443 | v1.23.3 | Running | 1     |
|---------|-----------|------------|--------------|------|---------|---------|-------|

@afbjorklund
Collaborator

afbjorklund commented Sep 17, 2022

But it seems to run just fine despite the error

I think the error just means that it failed to load the cached image, and will start to pull it from the network again.

The runtime doesn't matter (crio), only the driver engine (podman) does. They are very similar, but at different levels/caches...

~/.minikube/cache/kic/

~/.minikube/cache/images/


Actually the cache is also the same for all container runtimes, the runtime only matters for the "preload":

~/.minikube/cache/preloaded-tarball/

It can be forced to use only the cache with the --preload=false option. Then it will also cache the binaries.

~/.minikube/cache/linux/
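
Putting those together, a cache-only run might look like this (a sketch, not a tested recipe):

$ minikube start --driver=podman --download-only    # populate ~/.minikube/cache/
$ minikube start --driver=podman --preload=false    # skip the preload tarball, use the per-image cache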

@afbjorklund
Collaborator

afbjorklund commented Sep 17, 2022

Typical cache contents (with preload true and false):

~/.minikube/cache/
├── images
│   └── amd64
│       ├── gcr.io
│       │   └── k8s-minikube
│       │       └── storage-provisioner_v5
│       └── registry.k8s.io
│           ├── coredns
│           │   └── coredns_v1.9.3
│           ├── etcd_3.5.4-0
│           ├── kube-apiserver_v1.25.0
│           ├── kube-controller-manager_v1.25.0
│           ├── kube-proxy_v1.25.0
│           ├── kube-scheduler_v1.25.0
│           └── pause_3.8
├── kic
│   └── amd64
│       └── kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
├── linux
│   └── amd64
│       └── v1.25.0
│           ├── kubeadm
│           ├── kubectl
│           └── kubelet
└── preloaded-tarball
    ├── preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4
    └── preloaded-images-k8s-v18-v1.25.0-docker-overlay2-amd64.tar.lz4.checksum

12 directories, 14 files
$ docker images gcr.io/k8s-minikube/kicbase
REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
gcr.io/k8s-minikube/kicbase   v0.0.34   5f58fddaff43   2 weeks ago   1.14GB
$ minikube ssh docker images
REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
registry.k8s.io/kube-apiserver            v1.25.0   4d2edfd10d3e   3 weeks ago     128MB
registry.k8s.io/kube-scheduler            v1.25.0   bef2cf311509   3 weeks ago     50.6MB
registry.k8s.io/kube-controller-manager   v1.25.0   1a54c86c03a6   3 weeks ago     117MB
registry.k8s.io/kube-proxy                v1.25.0   58a9a0c6d96f   3 weeks ago     61.7MB
registry.k8s.io/pause                     3.8       4873874c08ef   3 months ago    711kB
registry.k8s.io/etcd                      3.5.4-0   a8a176a5d5d6   3 months ago    300MB
registry.k8s.io/coredns/coredns           v1.9.3    5185b96f0bec   3 months ago    48.8MB
k8s.gcr.io/pause                          3.6       6270bb605e12   12 months ago   683kB
gcr.io/k8s-minikube/storage-provisioner   v5        6e38f40d628d   17 months ago   31.5MB

The k8s.gcr.io/pause:3.6 image is there due to a bug.

@afbjorklund
Collaborator

afbjorklund commented Sep 17, 2022

The workaround would be to use --download-only, load the image into podman, and then run start again.

$ minikube start --driver=podman --download-only
$ minikube start --help | grep kicbase
    --base-image='gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c':
$ sudo podman load <~/.minikube/cache/kic/amd64/kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
Loaded image(s): gcr.io/k8s-minikube/kicbase:v0.0.34
$ minikube start --driver=podman

Or fix the minikube code to be less docker-centric, which is unfortunately a feature inherited from go-containerregistry.

(if it matters, the cache images are stored using the equivalent of crane pull which uses .tar with .tar.gz inside)
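
For context, the rough crane equivalent of how those cache tarballs are produced (crane being the go-containerregistry CLI; shown as an illustration, not minikube's literal invocation):

$ crane pull gcr.io/k8s-minikube/kicbase:v0.0.34 kicbase.tar    # docker-style tarball with .tar.gz layers inside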

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Sep 17, 2022
@iapicca
Author

iapicca commented Oct 3, 2022

sorry for the silence @afbjorklund

I managed to run minikube with podman, both rootless and rootful,
following @chicks-net's steps (thanks m8)

setup
 brew install minikube
 brew install podman
 podman machine init --cpus 2
~ minikube version
minikube version: v1.27.0
commit: 4243041b7a72319b9be7842a7d34b6767bbdac2b
~ podman version
Client:       Podman Engine
Version:      4.2.1
API Version:  4.2.1
Go Version:   go1.18.6
Built:        Tue Sep  6 22:16:02 2022
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.2.0
API Version:  4.2.0
Go Version:   go1.18.4
Built:        Thu Aug 11 17:43:11 2022
OS/Arch:      linux/arm64
[email protected]
---------------------------------------
OS: macOS 12.6 21G115 arm64
Host: MacBookAir10,1
Kernel: 21.6.0
Uptime: 3 hours, 12 mins
Packages: 54 (brew)
Shell: zsh 5.8.1
Resolution: 1440x900
DE: Aqua
WM: Quartz Compositor
WM Theme: Blue (Dark)
Terminal: iTerm2
Terminal Font: Monaco 12
CPU: Apple M1
GPU: Apple M1
Memory: 1082MiB / 8192MiB


rootful
 podman machine set --rootful
 podman system connection default podman-machine-default-root
 podman machine start
 minikube start --driver=podman --container-runtime=cri-o
~ minikube start --driver=podman --container-runtime=cri-o
😄  minikube v1.27.0 on Darwin 12.6 (arm64)
❗  Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
❗  For more information, see: https://github.com/kubernetes/kubernetes/issues/112135
✨  Using the podman (experimental) driver based on user configuration
📌  Using Podman driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.25.0 preload ...
    > preloaded-images-k8s-v18-v1...:  340.75 MiB / 340.75 MiB  100.00% 1.31 Mi
    > gcr.io/k8s-minikube/kicbase:  348.00 MiB / 348.00 MiB  100.00% 1.01 MiB p
E1003 15:51:21.375384   27482 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1956MB) ...
🎁  Preparing Kubernetes v1.25.0 on CRI-O 1.24.2 ...
E1003 15:54:12.003380   27482 start.go:129] Unable to get host IP: RoutableHostIPFromInside is currently only implemented for linux
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
rootless
 podman system connection default podman-machine-default
 minikube config set rootless true
 podman machine start
 minikube start --driver=podman --container-runtime=containerd
~ minikube start --driver=podman --container-runtime=containerd

😄  minikube v1.27.0 on Darwin 12.6 (arm64)
    ▪ MINIKUBE_ROOTLESS=true
❗  Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
❗  For more information, see: https://github.com/kubernetes/kubernetes/issues/112135
✨  Using the podman (experimental) driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.25.0 preload ...
    > preloaded-images-k8s-v18-v1...:  340.35 MiB / 340.35 MiB  100.00% 2.32 Mi
E1003 16:33:58.875113   34042 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1956MB) ...
📦  Preparing Kubernetes v1.25.0 on containerd 1.6.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

in both cases I get:
E1003 15:51:21.375384 27482 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
but I'm not sure whether it's relevant at this point

I did not try your workaround :(
but I think the issue could be closed if it's just an error message without consequences

@iapicca
Author

iapicca commented Nov 18, 2022

@afbjorklund
I tried your workaround, but it seems that minikube is looking for the file locally

minikube_setup git:(master) ✗  minikube start --download-only
😄  minikube v1.28.0 on Darwin 13.0.1 (arm64)
    ▪ MINIKUBE_ROOTLESS=true
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > kubectl.sha256:  64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    > kubectl:  47.38 MiB / 47.38 MiB [-------------] 100.00% 6.31 MiB p/s 7.7s
✅  Download complete!

minikube_setup git:(master) ✗ minikube start --help | grep kicbase
    --base-image='gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c':

    --base-image='gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456':
zsh: no such file or directory: --base-image=gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c

minikube_setup git:(master) ✗ minikube start --help | grep kicbase
    --base-image='gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c'

    --base-image='gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456':
zsh: no such file or directory: --base-image=gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
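
Note that the --base-image=... lines above are grep output from the help text, not commands; pasting one into zsh is exactly what produces the "no such file or directory" error. Only the podman load step from the earlier workaround needs to run. On an arm64 host with the v0.0.36 image it would be something like this (file name reconstructed from the cache layout shown earlier, so verify it against the contents of ~/.minikube/cache/kic/arm64/):

$ podman load < ~/.minikube/cache/kic/arm64/kicbase_v0.0.36@sha256_8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar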

@staticdev
Contributor

I am having the same problems on Linux: the podman driver does not work rootless (tried without container-runtime, and with cri-o and containerd).

@iapicca
Author

iapicca commented Dec 4, 2022

somehow, after 1'000'000 attempts, I managed;
I think the download part somehow worked,
and even if it throws an error in the end, minikube works

plus I added this to my .zshrc (macOS)

export DOCKER_HOST=unix:///run/podman/podman.sock
export DOCKER_HOST=unix:///var/run/nerdctl.sock
~ podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/yakforward:/Users/yakforward

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful

API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully~ podman images
REPOSITORY                   TAG         IMAGE ID      CREATED      SIZE
<none>                       <none>      4117dd375c44  2 days ago   547 MB
<none>                       <none>      3f6ea93abaea  2 days ago   547 MB
docker.io/library/dart       stable      51825d64cf05  8 days ago   516 MB
gcr.io/k8s-minikube/kicbase  v0.0.36     c87ac1e75807  5 weeks ago  1.03 GB

~ minikube start
😄  minikube v1.28.0 on Darwin 13.0.1 (arm64)
    ▪ MINIKUBE_ROOTLESS=true
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E1204 22:40:46.007070   29249 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
📦  Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
💡  Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube addons enable metrics-server


🌟  Enabled addons: storage-provisioner, default-storageclass, dashboard
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

~ minikube dashboard

🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:56160/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

but without knowing exactly how to make it work, I won't call the issue fixed

@avermeer

Hello, I had this issue on my Linux box... until I figured out it was caused by a disk quota issue.

Indeed minikube downloads a bunch of things into the hidden $HOME/.minikube and $HOME/.kube directories.

To bypass my quota issue (our home directories are hosted on a NetApp with company-wide enforced quotas), I just did this:

mkdir -p /home/data/lotsOfSpaceHere
export HOME=/home/data/lotsOfSpaceHere

then re-ran:
minikube start

=> it worked fine!

Of course that assumes there's some room on a local disk/volume to use.

Bottom line: minikube should correctly warn about disk space / quota exhaustion. I've seen a couple of similar reports with unrelated solutions (network stuff etc.), so it would be great if the dev team could improve disk-space reporting to help beginners.
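
Until such reporting exists, a quick manual check for this failure mode (plain coreutils, nothing minikube-specific):

$ du -sh ~/.minikube ~/.kube    # how much the hidden caches currently occupy
$ df -h "$HOME"                 # free space on the home filesystem (quota limits may need 'quota -s' instead)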

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 21, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 18, 2024