Rootless podman error kind create cluster: Generic error message doesn't apply #2684

Closed
caniko opened this issue Mar 19, 2022 · 9 comments
Labels: area/provider/podman, area/rootless, kind/bug

Comments

@caniko

caniko commented Mar 19, 2022

What happened:
Cannot create a cluster when running kind create cluster with podman (rootless). I get the following error and stack trace:

 ╭─can@Pyramidal in ~ took 1ms
[🔴] × kind create cluster -v 5
enabling experimental podman provider
ERROR: failed to create cluster: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
Stack Trace:
sigs.k8s.io/kind/pkg/errors.New
	sigs.k8s.io/kind/pkg/errors/errors.go:28
sigs.k8s.io/kind/pkg/cluster/internal/create.validateProvider
	sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:253
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster
	sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:70
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
	sigs.k8s.io/kind/pkg/cluster/provider.go:183
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
	sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:80
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
	sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:55
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:902
sigs.k8s.io/kind/cmd/kind/app.Run
	sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
	sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
	sigs.k8s.io/kind/main.go:25
runtime.main
	runtime/proc.go:255
runtime.goexit
	runtime/asm_amd64.s:1581

What you expected to happen:
To create a cluster with the podman driver. Everything runs fine when I am on docker, which is what I used before.

How to reproduce it (as minimally and precisely as possible):
Unsure, as the error message seems to be meant for those who haven't configured their machine properly, and I have already followed the guide.

My delegate.conf is different from the guide's, but it should suffice:

[Service]
Delegate=cpu cpuset io memory pids
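
(For comparison, the rootless guide at https://kind.sigs.k8s.io/docs/user/rootless/ uses a broader drop-in. A sketch of that setup, assuming the guide's drop-in path; the verification comment is an addition, not from the guide:)

# /etc/systemd/system/user@.service.d/delegate.conf, as in the guide:
[Service]
Delegate=yes

# then reload systemd and re-login so the user session picks up the change:
sudo systemctl daemon-reload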

Environment:

  • kind version: (use kind version): kind v0.12.0 go1.17.8 linux/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"archive", BuildDate:"2022-03-17T21:14:47Z", GoVersion:"go1.18", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Docker version: (use docker info):
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
WARN[0000]  binary not found, container dns will not be enabled
host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.0-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: bdb4f6e56cd193d40b75ffc9725d4b74a18cb33c'
  cpus: 16
  distribution:
    distribution: garuda
    version: unknown
  eventLogger: journald
  hostname: Pyramidal
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.16.15-zen1-1-zen
  linkmode: dynamic
  logDriver: journald
  memFree: 981315584
  memTotal: 16770576384
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.4.3-1
    path: /usr/bin/crun
    version: |-
      crun version 1.4.3
      commit: 61c9600d1335127eba65632731e2d72bc3f0b9e8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.1.12-1
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 25158475776
  swapTotal: 25158475776
  uptime: 3h 52m 29.11s (Approximately 0.12 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  docker.io:
    Blocked: false
    Insecure: false
    Location: docker.io
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: docker.io
  search:
  - docker.io
store:
  configFile: /home/can/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/can/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  volumePath: /home/can/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.1
  Built: 1647171863
  BuiltTime: Sun Mar 13 12:44:23 2022
  GitCommit: c8b9a2e3ec3630e9172499e15205c11b823c8107
  GoVersion: go1.17.8
  OsArch: linux/amd64
  Version: 4.0.1
  • OS (e.g. from /etc/os-release):
NAME="Garuda Linux"
PRETTY_NAME="Garuda Linux"
ID=garuda
ID_LIKE=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://garudalinux.org/"
DOCUMENTATION_URL="https://wiki.garudalinux.org/"
SUPPORT_URL="https://forum.garudalinux.org/"
BUG_REPORT_URL="https://gitlab.com/groups/garuda-linux/"
LOGO=garudalinux
@caniko added the kind/bug label Mar 19, 2022
@aojea
Contributor

aojea commented Mar 19, 2022

Does it work if you set the delegate as in the guide?

@caniko
Author

caniko commented Mar 20, 2022

Does it work if you set the delegate as in the guide?

Nope.

@caniko changed the title from "Rootless podman error kind craete cluster: Invalid error message" to "Rootless podman error kind create cluster: Invalid error message" Mar 20, 2022
@caniko changed the title from "Rootless podman error kind create cluster: Invalid error message" to "Rootless podman error kind create cluster" Mar 20, 2022
@caniko changed the title from "Rootless podman error kind create cluster" to "Rootless podman error kind create cluster: Generic error message doesn't apply" Mar 20, 2022
@BenTheElder added the area/provider/podman and area/rootless labels Mar 21, 2022
@ncdc

ncdc commented Mar 29, 2022

I'm seeing different output from podman info depending on whether I'm remote (on my Mac) or local (inside podman machine ssh); only the relevant portions are shown:

Mac:

host:
  cgroupControllers:
  - memory
  - pids

Local:

host:
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids

It looks like kind isn't happy because the remote version is missing cpu. This is podman 4.0.2.
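
(Independent of podman, a standard cgroup v2 check — not kind-specific — shows which controllers systemd has actually delegated to the user session:)

cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
# prints e.g. "cpuset cpu io memory pids"; if cpu is absent here,
# the Delegate= drop-in has not taken effect for this session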

@BenTheElder
Member

BenTheElder commented Mar 29, 2022

The podman driver checks for the necessary controllers to catch issues early; missing controllers in info seems like a podman bug.

podman support is experimental for this reason; docker has been far more stable to target and support.
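
(A rough Go sketch of the kind of early validation described above: parse podman info --format json for the delegated controllers and fail fast if one is missing. The required set and messages are illustrative, not kind's actual code:)

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// matches the "host.cgroupControllers" field visible in the
// podman info output pasted earlier in this thread
type podmanInfo struct {
	Host struct {
		CgroupControllers []string `json:"cgroupControllers"`
	} `json:"host"`
}

func main() {
	out, err := exec.Command("podman", "info", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "podman info failed:", err)
		os.Exit(1)
	}
	var info podmanInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Fprintln(os.Stderr, "cannot parse podman info:", err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, c := range info.Host.CgroupControllers {
		have[c] = true
	}
	// illustrative required set; kind's real check may differ
	for _, want := range []string{"cpu", "pids"} {
		if !have[want] {
			fmt.Printf("missing cgroup controller %q: set systemd Delegate= and re-login\n", want)
		}
	}
}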

@ncdc

ncdc commented Mar 29, 2022

👍 will chat with podman folks

@caniko
Author

caniko commented Mar 30, 2022

podman support is experimental for this reason; docker has been far more stable to target and support.

Podman is not my daily driver, yet. I'm helping out the community so I can make the switch one day :)

@BenTheElder
Member

BenTheElder commented Mar 30, 2022

should also add: rootless is its own still-stabilizing fun 🙃, and at the moment we seem to see more rootless + podman than rootless + docker (probably due to current packaging?)

@caniko
Author

caniko commented Mar 30, 2022

I honestly didn't know that rootless was a thing before I tried podman. The security benefits make it appealing to me.

@BenTheElder
Member

#2872 is newer but similar and has more detail. This issue was lost in the sea of GitHub notifications and unclear follow-up.

As discussed in #2872, I simply don't have time to reproduce all environments, and currently the only thing that is clear on this bug is that the environment does not have suitable cgroup controllers for running KIND / Kubernetes. A possible solution is mentioned in #2872 (comment).

#2684 (comment) seems to have been containers/podman#13710 (comment) (thanks @ncdc for following up with the podman folks on that 🙏) and is unrelated.

@caniko closed this as completed Aug 16, 2022