[podman container clone] the memory swappiness tuning doesn't play well on cgroup2 system #13916

Closed
chuanchang opened this issue Apr 19, 2022 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@chuanchang
Contributor

chuanchang commented Apr 19, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Memory swappiness tuning is not supported on a cgroup v2 system:

  • yet a running container is cloned successfully with the --memory-swappiness option on a cgroup v2 system
  • and no error is raised on a cgroup v2 system when cloning a container with an invalid memory swappiness value

Steps to reproduce the issue:

  1. $ ./bin/podman run --name mycnt1 -d quay.io/libpod/alpine sleep 99999

  2. $ ./bin/podman container clone --memory-swappiness=101 mycnt1 --name mycnt1-clone

Describe the results you received:
[ajia@Fedora35 podman]$ grep cgroup /proc/mounts
cgroup2 /sys/fs/cgroup cgroup2 rw,seclabel,nosuid,nodev,noexec,relatime 0 0
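The grep above is how the cgroup version was confirmed by hand; the same classification can be done in code. A minimal Go sketch (the helper name `cgroupVersion` is hypothetical, not podman's actual API) that decides the cgroup version from /proc/mounts content:

```go
package main

import (
	"fmt"
	"strings"
)

// cgroupVersion classifies the cgroup setup from /proc/mounts content:
// "v2" if a cgroup2 filesystem is mounted at /sys/fs/cgroup (the unified
// hierarchy, as on the Fedora 35 host above), "v1" otherwise.
func cgroupVersion(mounts string) string {
	for _, line := range strings.Split(mounts, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[1] == "/sys/fs/cgroup" && fields[2] == "cgroup2" {
			return "v2"
		}
	}
	return "v1"
}

func main() {
	// The exact mount line shown in the reproduction output above.
	sample := "cgroup2 /sys/fs/cgroup cgroup2 rw,seclabel,nosuid,nodev,noexec,relatime 0 0"
	fmt.Println(cgroupVersion(sample)) // prints "v2"
}
```

In real code one would read /proc/mounts (or statfs /sys/fs/cgroup) rather than pass a string, but the parsing logic is the same.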

[ajia@Fedora35 podman]$ ./bin/podman run --name mycnt1 -d quay.io/libpod/alpine sleep 99999
16e5d62424cffbc9df7f6dd11742b78d06e55274ca856f910623189fbce681c0

[ajia@Fedora35 podman]$ ./bin/podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16e5d62424cf quay.io/libpod/alpine:latest sleep 99999 4 seconds ago Up 4 seconds ago mycnt1

[ajia@Fedora35 podman]$ ./bin/podman container clone --memory-swappiness=101 mycnt1 --name mycnt1-clone
ac217d7481b0f935b63a3fe8f650953c4c115c618f7c22ef775575208a4c0997

NOTE: there is no error output such as "Error: invalid value: 101, valid memory swappiness range is 0-100".

[ajia@Fedora35 podman]$ ./bin/podman start mycnt1-clone
mycnt1-clone

NOTE: the container can also be started successfully without an error such as "Error: invalid value: 101, valid memory swappiness range is 0-100".

[ajia@Fedora35 podman]$ ./bin/podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac217d7481b0 quay.io/libpod/alpine:latest sleep 99999 53 minutes ago Up 52 minutes ago mycnt1-clone

Describe the results you expected:

  • memory swappiness tuning is not supported on a cgroup v2 system, so cloning a container there should fail with an error such as "Error: OCI runtime error: crun: cannot set memory swappiness with cgroupv2".
  • the valid range for memory swappiness is 0-100, so container cloning should fail when an invalid value is assigned on a cgroup v2 system, just as it does on a cgroup v1 system.
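The two expected checks above can be sketched as a single validation step. This is a hypothetical Go sketch of the behavior the report asks for, not podman's actual implementation; the function name and signature are assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// validateMemorySwappiness rejects out-of-range values on any system, and
// rejects the option entirely on cgroup v2, where the kernel does not
// expose a per-cgroup memory.swappiness knob.
func validateMemorySwappiness(value int64, cgroupV2 bool) error {
	if value < 0 || value > 100 {
		return fmt.Errorf("invalid value: %d, valid memory swappiness range is 0-100", value)
	}
	if cgroupV2 {
		return errors.New("cannot set memory swappiness with cgroupv2")
	}
	return nil
}

func main() {
	// The reproduction case: --memory-swappiness=101 on a cgroup v2 host.
	fmt.Println(validateMemorySwappiness(101, true)) // range check fails first
	// An in-range value still cannot be applied on cgroup v2.
	fmt.Println(validateMemorySwappiness(50, true))
}
```

Whether this check belongs in clone, in the whole creation path, or only at container start is exactly what the comments below discuss.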

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

[ajia@Fedora35 podman]$ ./bin/podman version
Client:       Podman Engine
Version:      4.0.0-dev
API Version:  4.0.0-dev
Go Version:   go1.16.14
Git Commit:   d6f47e692bc694d3ec4f3505acaccf7fa0b73231-dirty
Built:        Tue Apr 19 18:14:13 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

[ajia@Fedora35 podman]$ ./bin/podman info --debug
host:
  arch: amd64
  buildahVersion: 1.26.0-dev
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 8
  distribution:
    distribution: fedora
    variant: workstation
    version: "35"
  eventLogger: journald
  hostname: Fedora35
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.16.11-200.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1165221888
  memTotal: 16651931648
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.4.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.2
      commit: f6fbc8f840df1a414f31a60953ae514fa497c748
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 320h 50m 34.54s (Approximately 13.33 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/ajia/.config/containers/storage.conf
  containerStore:
    number: 27
    paused: 0
    running: 4
    stopped: 23
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/ajia/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 14
  runRoot: /run/user/1000/containers
  volumePath: /home/ajia/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.0-dev
  Built: 1650363253
  BuiltTime: Tue Apr 19 18:14:13 2022
  GitCommit: d6f47e692bc694d3ec4f3505acaccf7fa0b73231-dirty
  GoVersion: go1.16.14
  Os: linux
  OsArch: linux/amd64
  Version: 4.0.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

[ajia@Fedora35 podman]$ git rev-parse HEAD
d6f47e692bc694d3ec4f3505acaccf7fa0b73231

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.): physical

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 19, 2022
@cdoern
Contributor

cdoern commented Apr 19, 2022

I will add a check for cgroupsv2 and return an error.

cdoern pushed a commit to cdoern/podman that referenced this issue Apr 19, 2022
@cdoern
Contributor

cdoern commented Apr 19, 2022

Oh, I see: this check only happens when a container is run. @chuanchang, if you start the container it fails, so I will add a clone-specific check.

@cdoern
Contributor

cdoern commented Apr 19, 2022

Actually, I am unsure whether this is a clone bug... @rhatdan, container create also behaves this way until you start the container. Should I add a check for the entire creation path, not just clone?

@cdoern
Contributor

cdoern commented Apr 19, 2022

@chuanchang discussed this with the team and it is the intended behavior that cgroups related configs are validated at runtime.

@cdoern cdoern closed this as completed Apr 19, 2022
@chuanchang
Contributor Author

chuanchang commented Apr 19, 2022

@chuanchang discussed this with the team and it is the intended behavior that cgroups related configs are validated at runtime.

It works well on a cgroup v1 system, but the expected error is not raised on a cgroup v2 system even after the cloned container is started later. Please see the details in the "Describe the results you received:" section above; I will list them here again.

[ajia@Fedora35 podman]$ ./bin/podman container clone --memory-swappiness=101 mycnt1 --name mycnt1-clone
ac217d7481b0f935b63a3fe8f650953c4c115c618f7c22ef775575208a4c0997

NOTE: there is no error output such as "Error: invalid value: 101, valid memory swappiness range is 0-100".

[ajia@Fedora35 podman]$ ./bin/podman start mycnt1-clone
mycnt1-clone

NOTE: the container can also be started successfully without an error such as "Error: invalid value: 101, valid memory swappiness range is 0-100".

Also, the expected error "Error: OCI runtime error: crun: cannot set memory swappiness with cgroupv2" is not produced on a cgroup v2 system after starting the cloned container.

@DocMAX

DocMAX commented Feb 24, 2023

Can't start my container anymore. What can I do?

docmax@zeus: ~ $ podman start portainer
Error: OCI runtime error: unable to start container "5d7093c04d835a5c093d7be245198070189d689bc0725dc4d7e7286fcf5e3f8c": crun: cannot set memory swappiness with cgroupv2

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 31, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 31, 2023