podman run failure on cgroupsv1 rootless related to --pids-limit #6834
Comments
This chunk of code seems to have handled this before.
Per @mheon:
I'll take a look after lunch, see if I can't chase this down.
This seems to check at the kernel level. Looks like it just sees if the pids controller is mounted (in sysInfo), not whether we actually have the ability to write to the pids controller for the cgroup we're going to be a part of. I don't see any rootless checks in this either.
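For context, a rough way to see that distinction on a cgroups v1 host (purely illustrative; the pids mount point and cgroup path vary per system):
# the controller being mounted is not the same as being able to write to it
$ grep -w pids /proc/cgroups
$ cgpath=$(awk -F: '$2 == "pids" {print $3}' /proc/self/cgroup)
$ ls -l /sys/fs/cgroup/pids${cgpath}/pids.max   # typically root-owned for rootless users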
DefaultPidsLimit was introduced in containers/common.
Is this related to #6734?
I don't think so. I think this is specific to us trying to mimic the system PID limit.
Alright, this one is honestly kind of bizarre. It worked fine when I built directly from master. In order to confirm code was running, I added a single
I only have one cgroups v1 host set up to run rootless containers, so it's hard for me to narrow down what's causing it...
@goochjj Show …; if this shows as above (default 2048), then this is the issue. On a cgroups v1 system this should show no default.
@rhatdan It's set to 0, but having it set to anything is the problem.
This is the code that sets the default:
// PidsLimit returns the default maximum number of pids to use in containers
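For illustration, the kind of cgroups-aware default being argued for might look like this sketch (the function and its parameters are made up for illustration, not the actual containers/common code):
package config

// defaultPidsLimit is an illustrative sketch: apply the 2048 default only
// where the pids controller is actually usable. Rootless users on cgroups v1
// cannot write to the pids controller, so a non-zero default would make every
// container creation fail there.
func defaultPidsLimit(rootless, cgroupsV2 bool) int64 {
	if rootless && !cgroupsV2 {
		return 0 // 0 means "do not set a limit"
	}
	return 2048
}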
If you inspect the container, does it show ulimits?
So look at this: my crun-debug grabs the config.json and dumps it in my XDG_RUNTIME_DIR:
vs:
I think crun only looks for a resources/pids tree, not that it's actually > 0.
Yeah, crun just checks that the resources->pids structure is specified (non-null), not that resources->pids->limit > 0 or that resources->pids->limit_present is set.
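For illustration, the relevant part of the generated OCI config.json looks roughly like this (values illustrative); crun only cares whether the pids object exists at all:
"linux": {
  "resources": {
    "pids": { "limit": 0 }
  }
}
versus a config where the pids object is omitted entirely.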
#6837 should fix it.
However, in some cases (unset limits), we can completely remove the limit and avoid errors. This works around a bug where the Podman frontend is setting a Pids limit of 0 on some rootless systems. For now, this is only implemented for the PID limit. It can easily be extended to other resource limits, but it is a fair bit of code to do so, so I leave that exercise to someone else.
Fixes containers#6834
Signed-off-by: Matthew Heon <[email protected]>
I prefer my fix.
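A rough sketch of that remove-the-limit-when-unset approach, using the runtime-spec Go types (illustrative only, not the actual patch):
package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// applyPidsLimit only emits a pids resource block in the OCI spec when a
// positive limit was requested, so rootless cgroups v1 hosts never receive
// an unsatisfiable limit.
func applyPidsLimit(s *spec.Spec, limit int64) {
	if limit <= 0 {
		return // unset or 0: leave resources.pids out entirely
	}
	if s.Linux == nil {
		s.Linux = &spec.Linux{}
	}
	if s.Linux.Resources == nil {
		s.Linux.Resources = &spec.LinuxResources{}
	}
	s.Linux.Resources.Pids = &spec.LinuxPids{Limit: limit}
}

func main() {
	s := &spec.Spec{}
	applyPidsLimit(s, 0)
	fmt.Println(s.Linux == nil) // true: no pids block was added
}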
A friendly reminder that this issue had no activity for 30 days. |
@goochjj If the latest version does not fix this issue, could you reopen it?
This started appearing for me on a setup that used to work. Problem started after a
$ podman run -it --rm --pids-limit 2000 docker.io/fedora:33
Error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: cannot set pids limit: container could not join or create cgroup: OCI runtime error
From dnf.log:
Could you verify that libpod.conf and containers.conf are set up correctly? This is an old bug that has been fixed for many months.
containers.conf is vanilla:
$ locate containers.conf
/usr/share/containers/containers.conf
/usr/share/man/man5/containers.conf.5.gz
$ rpm -qf /usr/share/containers/containers.conf
containers-common-1.1.1-3.module_el8.3.0+475+c50ce30b.aarch64
$ rpm -q --verify containers-common-1.1.1-3.module_el8.3.0+475+c50ce30b.aarch64
There is no libpod.conf on this system.
No, we don't support any kind of limit on cgroup v1 with rootless.
Well, it worked with 1.6.4 and stopped working with 2.0.5. Can you suggest a workaround? That system is useless now.
Do you have any file under …? A possible workaround is to override …
No.
I get the same error. Or do you mean drop --pids-limit and rely on the value from config? That's unfortunate for me because it means I have to detect versions and have my scripts behave differently depending on what they find (and require the user to adjust config, but for this machine the user is me, so I don't mind).
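For reference, relying on the config value rather than the flag would mean a user-level containers.conf along these lines (a sketch; pids_limit under [containers] is the relevant key, and 0 disables the default limit):
# ~/.config/containers/containers.conf
[containers]
pids_limit = 0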
In podman 1.6 perhaps we were ignoring the flag, but the pids limit was not being set for rootless containers.
So I'll have to add a version check. From which version can I rely on --pids-limit to work?
I suspect 2.0.0 and up - there was an extensive rewrite of container creation to enable the new HTTP API, and this looks like a consequence of it.
This failure is with 2.0.5.
I'm 99% sure the new behavior here is correct - cgroups control is required for the PIDs limit to be set, and cgroups v1 + rootless does not have sufficient access to cgroups to set said limit. I strongly suspect v1.9 and lower just ignored the limit, and now we've begun passing it down to the OCI runtime, which errors on the limit being set. If you require that a rootless PIDs limit be set, a cgroups v2 system would be a requirement. @giuseppe Can you confirm?
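For anyone hitting this, a quick way to check which cgroup version a host is running:
$ stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means cgroups v2 (rootless pids limits can work)
# "tmpfs" means cgroups v1 (rootless pids limits are not supported)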
Just to make sure this is not a containers.conf issue, could you try to run the podman command with CONTAINERS_CONF=/dev/null podman ...
$ CONTAINERS_CONF=/dev/null podman run -it --rm --pids-limit 2000 docker.io/fedora:33
Error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: cannot set pids limit: container could not join or create cgroup: OCI runtime error
Ok, never mind, I missed the fact that you are specifying --pids-limit, which, as has been pointed out, will not work on cgroups v1 in rootless mode.
/kind bug
Description
pids-limit is getting set to 2048 as a default, which is fine unless you're rootless on cgroups v1, at which point you can't create anything without explicitly setting --pids-limit 0.
Steps to reproduce the issue:
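A minimal reproduction based on the description above (image and setup are arbitrary; this assumes a rootless user on a cgroups v1 host):
$ podman run --rm docker.io/library/alpine true
# fails with a cgroup / pids-limit OCI runtime error
$ podman run --rm --pids-limit 0 docker.io/library/alpine true
# succeeds, since no pids limit is applied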
Describe the results you expected:
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
origin/master