Fail to run rootful podman on Arch #2873

Closed

caniko opened this issue Aug 10, 2022 · 4 comments
Labels
area/provider/podman: Issues or PRs related to podman
kind/bug: Categorizes issue or PR as related to a bug.

caniko commented Aug 10, 2022

What happened:

Fail to run kind with rootful podman using @maciekmm's method (#2868 (comment)). Rootless still doesn't work for me; the guide in the docs is really not meant for Arch. I already posted my opinion on this in #2872 (comment).

Rootful, to be specific. The following all fail:

  • sudo kind create cluster
  • sudo KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER=fuse-overlayfs kind create cluster
  • sudo KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER=native kind create cluster

Output every time:

enabling experimental podman provider
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 
 ✓ Preparing nodes 📦  
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.getSubnets({0x77d9fb?, 0x7a6280?})
        sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/provision.go:275 +0x199
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.getProxyEnv(0x77db16?, {0x77d9fb, 0x4})
        sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/provision.go:249 +0x74
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.commonArgs(0xc00021a000, {0x77d9fb, 0x4})
        sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/provision.go:137 +0x2c5
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.planCreation(0xc00021a000, {0x77d9fb, 0x4})
        sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/provision.go:40 +0x78
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.(*provider).Provision(0xc00020a018, 0xc0001d0000, 0xc00021a000)
        sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/provider.go:94 +0x1f0
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster({0x802548, 0xc0000113b0?}, {0x803230?, 0xc00020a018}, 0xc000218000)
        sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:101 +0x36c
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create(0xc000070680, {0x0, 0x0}, {0xc000135be0, 0x7, 0xc000120901?})
        sigs.k8s.io/kind/pkg/cluster/provider.go:182 +0xa5
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE({0x802548?, 0xc0000113b0}, {{0x7ff640, 0xc00000e010}, {0x7ff680, 0xc00000e018}, {0x7ff680, 0xc00000e020}}, 0xc00007c460)
        sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:80 +0x408
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1(0xc000161400?, {0xa0e958?, 0x0?, 0x0?})
        sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:55 +0x72
github.com/spf13/cobra.(*Command).execute(0xc000161400, {0xa0e958, 0x0, 0x0})
        github.com/spf13/[email protected]/command.go:856 +0x67c
github.com/spf13/cobra.(*Command).ExecuteC(0xc000160000)
        github.com/spf13/[email protected]/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:902
sigs.k8s.io/kind/cmd/kind/app.Run({0x802548, 0xc0000113b0}, {{0x7ff640, 0xc00000e010}, {0x7ff680, 0xc00000e018}, {0x7ff680, 0xc00000e020}}, {0xc0000100a0, 0x2, ...})
        sigs.k8s.io/kind/cmd/kind/app/main.go:53 +0x145
sigs.k8s.io/kind/cmd/kind/app.Main()
        sigs.k8s.io/kind/cmd/kind/app/main.go:35 +0xe7
main.main()
        sigs.k8s.io/kind/main.go:25 +0x17

Running with --retain had no effect; no output was generated.

sudo podman run hello-world runs fine.

What you expected to happen:
I'd expect kind create cluster to successfully create a cluster.

How to reproduce it (as minimally and precisely as possible):
On Garuda Linux, on the linux-lts or linux-zen kernel, run kind create cluster.

Anything else we need to know?:

Environment:

  • kind version: (use kind version): kind v0.14.0 go1.18.2 linux/amd64
  • Kubernetes version: (use kubectl version):
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"archive", BuildDate:"2022-08-08T17:09:48Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • OS (e.g. from /etc/os-release): Garuda ~= Arch
  • Docker version: (use docker info -> podman info):
host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.3-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.3, commit: ab52a597278b20173440140cd810dc9fa8785c93'
  cpuUtilization:
    idlePercent: 97.54
    systemPercent: 0.61
    userPercent: 1.86
  cpus: 16
  distribution:
    distribution: garuda
    version: unknown
  eventLogger: journald
  hostname: Stellate
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.18.16-zen1-1-zen
  linkmode: dynamic
  logDriver: journald
  memFree: 9140314112
  memTotal: 33606750208
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.5-1
    path: /usr/bin/crun
    version: |-
      crun version 1.5
      commit: 54ebb8ca8bf7e6ddae2eb919f5b82d1d96863dea
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 50590638080
  swapTotal: 50592735232
  uptime: 54m 21.49s
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/can/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/can/.local/share/containers/storage
  graphRootAllocated: 293601280000
  graphRootUsed: 239687503872
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 26
  runRoot: /run/user/1000/containers
  volumePath: /home/can/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1659559968
  BuiltTime: Wed Aug  3 22:52:48 2022
  GitCommit: f73d8f8875c2be7cd2049094c29aff90b1150241-dirty
  GoVersion: go1.19
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1
caniko added the kind/bug label Aug 10, 2022
BenTheElder commented Aug 10, 2022

> Fail to run kind with rootful podman using @maciekmm's #2868 (comment). Rootless still doesn't work for me, the guide on the docs is really not meant for Arch, I already posted my opinion on this #2872 (comment).

This failure is unrelated.

> panic: runtime error: index out of range [0] with length 0
>
> goroutine 1 [running]:
> sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.getSubnets({0x77d9fb?, 0x7a6280?})

This means we failed to get the network with podman.

Likely, podman is either misconfigured or has made a breaking change to its network information output 🤔
We've seen both before.

> Running with --retain had no effect; no output was generated.

$ kind create cluster --help | grep retain
      --retain              retain nodes for debugging when cluster creation fails

We didn't get to the point of creating nodes because obtaining the kind network information from podman failed.

func getSubnets(networkName string) ([]string, error) {
	// TODO: unmarshall json and get rid of this complex query
	format := `{{ range (index (index (index (index . "plugins") 0 ) "ipam" ) "ranges")}}{{ index ( index . 0 ) "subnet" }} {{end}}`
	cmd := exec.Command("podman", "network", "inspect", "-f", format, networkName)
	lines, err := exec.OutputLines(cmd)
	if err != nil {
		return nil, errors.Wrap(err, "failed to get subnets")
	}
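
For what it's worth, here is a minimal sketch of what that TODO could look like: parsing the inspect JSON instead of templating over it. This is illustrative only, not the actual kind fix; the struct below assumes the legacy CNI plugins/ipam/ranges layout that the template walks, plus the flat subnets list that netavark-backed installs (networkBackend: netavark in the report above) emit.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// networkInspect models only the fields needed from
// `podman network inspect <name>`, which prints a JSON array
// with one object per network.
type networkInspect struct {
	// netavark / podman >= 4 layout: a flat list of subnets.
	Subnets []struct {
		Subnet string `json:"subnet"`
	} `json:"subnets"`
	// legacy CNI layout: the plugins[0].ipam.ranges path the
	// Go template above walks.
	Plugins []struct {
		IPAM struct {
			Ranges [][]struct {
				Subnet string `json:"subnet"`
			} `json:"ranges"`
		} `json:"ipam"`
	} `json:"plugins"`
}

func getSubnets(networkName string) ([]string, error) {
	out, err := exec.Command("podman", "network", "inspect", networkName).Output()
	if err != nil {
		return nil, fmt.Errorf("failed to inspect network %q: %w", networkName, err)
	}
	var networks []networkInspect
	if err := json.Unmarshal(out, &networks); err != nil {
		return nil, fmt.Errorf("failed to decode inspect output: %w", err)
	}
	var subnets []string
	for _, n := range networks {
		for _, s := range n.Subnets {
			subnets = append(subnets, s.Subnet)
		}
		for _, p := range n.Plugins {
			for _, r := range p.IPAM.Ranges {
				for _, s := range r {
					subnets = append(subnets, s.Subnet)
				}
			}
		}
	}
	// Callers must tolerate an empty slice here; indexing [0]
	// unconditionally is exactly what panics in the trace above.
	return subnets, nil
}

func main() {
	fmt.Println(getSubnets("kind"))
}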

BenTheElder added the area/provider/podman label Aug 10, 2022
caniko commented Aug 11, 2022

@BenTheElder, I think something is wrong with my system specifically. Troubleshooting tips?

BenTheElder commented

podman network inspect kind would be a good starting point, to see what the network actually is and start to figure out why there's no ipam assigned.

Recalling #2821: on Fedora, a podman update broke existing networks by removing the CNI binaries even though the packaging had previously configured podman to use them ... could be something similar on Arch.
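
A quick diagnostic along those lines, sketched in Go (run it with sudo to match the rootful case; both podman invocations are standard, and the .Host.NetworkBackend format path corresponds to the host.networkBackend field in the podman info dump above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Which network backend is podman actually using?
	backend, err := exec.Command("podman", "info", "--format", "{{.Host.NetworkBackend}}").Output()
	if err != nil {
		fmt.Println("podman info failed:", err)
		return
	}
	fmt.Println("network backend:", strings.TrimSpace(string(backend)))

	// Does the kind network exist, and which JSON shape does it report?
	inspect, err := exec.Command("podman", "network", "inspect", "kind").Output()
	if err != nil {
		fmt.Println("inspecting the kind network failed (was it created?):", err)
		return
	}
	// A crude shape check: if neither key is present, kind's subnet
	// query would come back empty, matching the panic above.
	fmt.Println("CNI-style plugins key present:", strings.Contains(string(inspect), `"plugins"`))
	fmt.Println("netavark-style subnets key present:", strings.Contains(string(inspect), `"subnets"`))
}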

caniko commented Aug 16, 2022

I am using docker for now, it's much easier. Apologies for this waste of time.

caniko closed this as completed Aug 16, 2022