
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd #5903

Closed
sonikbhoom opened this issue Apr 20, 2020 · 24 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@sonikbhoom

sonikbhoom commented Apr 20, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. Use podman-compose version 0.1.6dev (also tested against 0.1.7dev)

  2. run podman-compose against attached YAML
    $ podman-compose/podman_compose.py -t 1podfw -f docker-compose-podman.yml up -d

  3. Downgrade podman to 1.6.2; the up command then succeeds

Describe the results you received:

$ podman-compose/podman_compose.py -t 1podfw -f docker-compose-podman.yml up -d 
podman pod create --name=docker --share net -p 1900:1900/udp -p 32413:32413/udp -p 32414:32414/udp -p 32410:32410/udp -p 8324:8324/tcp -p 32469:32469/tcp -p 3005:3005/tcp -p 32400:32400/tcp -p 32412:32412/udp
5c1d2396c81d3e84fb32119209998f62600c6246de85678873430985acf5669b
0
podman volume inspect docker_plexms_61d2c5338290839e067190b2927f1f0d || podman volume create docker_plexms_61d2c5338290839e067190b2927f1f0d
podman run --name=plexms -d --pod=docker --label io.podman.compose.config-hash=123 --label io.podman.compose.project=docker --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=plexms --network host -e TZ="America/Toronto" -e HOSTNAME="localhost plex" -e PLEX_CLAIM="<redacted>" -e PLEX_UID=1000 -e PLEX_GID=2002 -e ADVERTISE_IP="http://<redacted>:32400" --mount type=bind,source=/home/sonik/docker/./plexms/plexms-config,destination=/config,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./plexms/plex_tmp,destination=/transcode,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./plexms/shared,destination=/shared,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./../Videos,destination=/media,bind-propagation=z --mount type=bind,source=/home/sonik/.local/share/containers/storage/volumes/docker_plexms_61d2c5338290839e067190b2927f1f0d/_data,destination=/run,bind-propagation=Z --add-host plexms:127.0.0.1 --add-host plexms:127.0.0.1 --privileged plexinc/pms-docker
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
125

Describe the results you expected:

$ podman-compose/podman_compose.py -t 1podfw -f docker-compose-podman.yml up -d 
podman pod create --name=docker --share net -p 32412:32412/udp -p 1900:1900/udp -p 8324:8324/tcp -p 32400:32400/tcp -p 32414:32414/udp -p 3005:3005/tcp -p 32410:32410/udp -p 32469:32469/tcp -p 32413:32413/udp
c5499607c15724aa8e9c0686cc19116597f34851d8b05a0bd968a8a1b190cac8
0
podman volume inspect docker_plexms_61d2c5338290839e067190b2927f1f0d || podman volume create docker_plexms_61d2c5338290839e067190b2927f1f0d
podman run --name=plexms -d --pod=docker --label io.podman.compose.config-hash=123 --label io.podman.compose.project=docker --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=plexms --network host -e TZ="America/Toronto" -e HOSTNAME="localhost plex" -e PLEX_CLAIM="<redacted>" -e PLEX_UID=1000 -e PLEX_GID=2002 -e ADVERTISE_IP="http://<redacted>:32400" --mount type=bind,source=/home/sonik/docker/./plexms/plexms-config,destination=/config,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./plexms/plex_tmp,destination=/transcode,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./plexms/shared,destination=/shared,bind-propagation=z --mount type=bind,source=/home/sonik/docker/./../Videos,destination=/media,bind-propagation=z --mount type=bind,source=/home/sonik/.local/share/containers/storage/volumes/docker_plexms_61d2c5338290839e067190b2927f1f0d/_data,destination=/run,bind-propagation=Z --add-host plexms:127.0.0.1 --add-host plexms:127.0.0.1 --privileged plexinc/pms-docker
7ed4223d66e23d235adcb69ca2f48cf56fb2312e255b9649dd7f68f94e761b62
0

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.9.0
RemoteAPI Version:  1
Go Version:         go1.13.9
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.13.9
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.15-1.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 4152e6044da92e0c5f246e5adf14c85f41443759'
  cpus: 4
  distribution:
    distribution: fedora
    version: "31"
  eventLogger: journald
  hostname: darkdog3
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.5.17-200.fc31.x86_64
  memFree: 5990744064
  memTotal: 8324005888
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.0.0-1.fc31.x86_64
    version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.1.0
  swapFree: 8480878592
  swapTotal: 8480878592
  uptime: 8h 15m 8.17s (Approximately 0.33 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/sonik/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.8-1.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.8
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  graphRoot: /home/sonik/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 4
  runRoot: /run/user/1000
  volumePath: /home/sonik/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.0-1.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

YAML file:

services:
  plexms:
    privileged: true
    container_name: plexms
    restart: unless-stopped
    image: plexinc/pms-docker
    volumes:
      - ./plexms/plexms-config:/config:z
      - ./plexms/plex_tmp:/transcode:z
      - ./plexms/shared:/shared:z
      - ./../Videos:/media:z
      - /run 
    ports:
      - "32400:32400/tcp"
      - "3005:3005/tcp"
      - "8324:8324/tcp"
      - "32469:32469/tcp"
      - "1900:1900/udp"
      - "32410:32410/udp"
      - "32412:32412/udp"
      - "32413:32413/udp"
      - "32414:32414/udp"
    environment:
      - TZ="America/Toronto"
      - HOSTNAME="localhost plex"
      - PLEX_CLAIM="<redacted>"
      - PLEX_UID=1000
      - PLEX_GID=2002
      - ADVERTISE_IP="http://<redacted>:32400"
    network_mode: "host"
@openshift-ci-robot added the kind/bug label Apr 20, 2020
@hooksie1
Contributor

Ran into this yesterday and took a while to figure out it was 1.9 that broke. Reverting versions made it work again.

@mheon
Member

mheon commented Apr 20, 2020

@rhatdan PTAL - I suspect this is containers.conf related.

@mheon
Member

mheon commented Apr 20, 2020

It sounds like it also reproduces without podman-compose involved

@hooksie1
Contributor

It sounds like it also reproduces without podman-compose involved

Yeah

@yannlawrency

yannlawrency commented Apr 20, 2020

It runs if the podman command is provided with the --cgroup-manager=systemd argument or run using sudo.

This isn't the solution, just an observation.

@sonikbhoom
Author

runs if the podman command is provided with the --cgroup-manager=systemd argument or run using sudo.

This isn't the solution, just an observation.

If this is the case, then perhaps I should open a bug with podman-compose.

@mheon
Member

mheon commented Apr 20, 2020

@rhatdan We may have swapped default cgroup manager for rootless from cgroupfs to systemd in 1.9

@hooksie1
Contributor

Ah, my libpod.conf had cgroupfs set for cgroup_manager. It was set that way on my laptop and a box I run some containers on. I don't ever remember setting that. I only noticed from here: https://github.com/containers/libpod/blob/master/docs/source/markdown/podman.1.md

--cgroup-manager=manager

CGroup manager to use for container cgroups. Supported values are cgroupfs or systemd. Default is systemd unless overridden in the libpod.conf file.

@sonikbhoom
Author

sonikbhoom commented Apr 21, 2020

switching ~/.config/containers/libpod.conf to systemd results in:

Error: error executing hook `/usr/libexec/oci/hooks.d/oci-systemd-hook` (exit code: 1): OCI runtime error

@rhatdan
Member

rhatdan commented Apr 21, 2020

Could you just remove oci-systemd-hook and ~/.config/containers/libpod.conf?

Not sure oci-systemd-hook would work with rootless and it is not needed by podman.

@rhatdan
Member

rhatdan commented Apr 21, 2020

containers.conf replaces libpod.conf; we only keep reading libpod.conf for compatibility.

@mheon
Member

mheon commented Apr 21, 2020

Can someone provide a sample libpod.conf that is causing them breakage, as well? That would help track down the cause of the breaks.

@mtorromeo

In my case, modifying cgroup_manager = "cgroupfs" to cgroup_manager = "systemd" in ~/.config/containers/libpod.conf fixed the issue.

@mtorromeo

Maybe related: I'm on kernel 5.6.5 with cgroup_no_v1="all"

@sonikbhoom
Author

$ cat ~/.config/containers/libpod.conf.BAK 
volume_path = "/home/sonik/.local/share/containers/storage/volumes"
image_default_transport = "docker://"
runtime = "/usr/bin/crun"
runtime_supports_json = ["runc"]
conmon_path = ["/usr/libexec/podman/conmon", "/usr/local/libexec/podman/conmon", "/usr/local/lib/podman/conmon", "/usr/bin/conmon", "/usr/sbin/conmon", "/usr/local/bin/conmon", "/usr/local/sbin/conmon", "/run/current-system/sw/bin/conmon"]
conmon_env_vars = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
cgroup_manager = "cgroupfs"
init_path = ""
static_dir = "/home/sonik/.local/share/containers/storage/libpod"
tmp_dir = "/run/user/1000/libpod/tmp"
max_log_size = -1
no_pivot_root = false
cni_config_dir = "/etc/cni/net.d/"
cni_plugin_dir = ["/usr/libexec/cni", "/usr/lib/cni", "/usr/local/lib/cni", "/opt/cni/bin"]
infra_image = "k8s.gcr.io/pause:3.1"
infra_command = "/pause"
enable_port_reservation = true
label = true
network_cmd_path = ""
num_locks = 2048
lock_type = "shm"
events_logger = "journald"
events_logfile_path = ""
detach_keys = "ctrl-p,ctrl-q"
SDNotify = false
cgroup_check = true

[runtimes]
  crun = ["/usr/bin/crun", "/usr/local/bin/crun"]
  runc = ["/usr/bin/runc", "/usr/sbin/runc", "/usr/local/bin/runc", "/usr/local/sbin/runc", "/sbin/runc", "/bin/runc", "/usr/lib/cri-o-runc/sbin/runc", "/run/current-system/sw/bin/runc"]

@sonikbhoom
Author

sonikbhoom commented Apr 21, 2020

Could you just remove oci-systemd-hook and ~/.config/containers/libpod.conf

Not sure oci-systemd-hook would work with rootless and it is not needed by podman.

$ mv ~/.config/containers/libpod.conf ~/.config/containers/libpod.conf.BAK
$ sudo yum remove oci-systemd-hook-0.2.0-2.git05e6923.fc31.x86_64

Setting cgroup_manager = "cgroupfs" in /usr/share/containers/libpod.conf and performing the two steps above allows the container to start.

Should I close the issue?

@mheon
Member

mheon commented Apr 21, 2020

cgroup_manager = "cgroupfs" in libpod.conf seems to be causing the problem, but only in 1.9...

Can someone try keeping cgroup_manager as cgroupfs but changing events_logger to file?

@sonikbhoom
Author

sonikbhoom commented Apr 21, 2020

cgroup_manager = "cgroupfs" in libpod.conf seems to be causing the problem, but only in 1.9...

Can someone try keeping cgroup_manager as cgroupfs but changing events_logger to file?

EDIT: I didn't add a log file location (oops) :P

got the same error from podman run:

Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd

To be clear, I set cgroup_manager = "cgroupfs" in /usr/share/containers/libpod.conf as well.

@jwflory
Contributor

jwflory commented Apr 21, 2020

I can confirm this with Podman v1.9.0 on Fedora 31. Setting cgroup_manager = "systemd" in ~/.config/containers/libpod.conf fixed Podman for me and my containers started up again.

@rhatdan:
containers.conf replaces libpod.conf, we just use it for compatibility mode.

Is the containers.conf file used in v1.9.0? I searched my Fedora 31 system and couldn't find this file.

$ rpm -ql podman | grep ".conf"
/etc/cni/net.d/87-podman-bridge.conflist
/usr/lib/tmpfiles.d/podman.conf
/usr/share/containers/libpod.conf
/usr/share/man/man5/containers-mounts.conf.5.gz
/usr/share/man/man5/libpod.conf.5.gz

@rhatdan
Member

rhatdan commented Apr 22, 2020

We have not released it yet; it will ship in Fedora 32 via containers-common. It was supposed to have the same defaults. The problem is that older versions of podman had bugs that forced "file" as the events_logger. We are working to fix this in podman-1.9.1, but for now you can work around it by using containers.conf.
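For reference, the equivalent settings in containers.conf live under the [engine] table. This is a sketch of only the two keys discussed in this thread; check containers.conf(5) on your system for the exact key names and defaults of your installed version.

```toml
# ~/.config/containers/containers.conf (rootless) or
# /usr/share/containers/containers.conf (distribution defaults)
[engine]
cgroup_manager = "systemd"
events_logger = "journald"
```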

@avikivity

I am seeing this too, even with --cgroups-manager=systemd.

podman-1.9.1-1.fc32.x86_64

@avikivity

podman system migrate and podman system reset did not fix it.

@mheon
Member

mheon commented May 4, 2020

Can you open a fresh issue? This should have been resolved with v1.9.1

@avikivity

#6084

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023

9 participants