
Running nested inside docker starts failing with Ubuntu Jammy: writing file /sys/fs/cgroup/buildah-<random>/cgroup.procs: Operation not supported #14884

Closed
ianw opened this issue Jul 11, 2022 · 17 comments · Fixed by #14904
Labels: locked - please file new issue/PR

Comments

@ianw
Contributor

ianw commented Jul 11, 2022

In our CI, we run an application (nodepool/diskimage-builder) in a container under Docker that then, nested inside this container, runs podman (we use it to extract the root image of containers that we then modify with diskimage-builder). Our CI recently updated from Ubuntu Focal to Ubuntu Jammy and this started failing.

The container the app runs in (under Docker) is Debian; it then runs podman 3.4.7+ds1-3+b1. This has not changed.

When running on an Ubuntu Focal host, this works [1]. Our CI saves quite a bit of info about the host, but perhaps the most interesting detail is the kernel: 5.4.0-121-generic.

A change that switched only the base OS to Ubuntu Jammy started failing to run podman with [2]:

2022-07-06 00:16:37.594 | + podman build -t dib-tmp-work-image-22460 -f /usr/local/lib/python3.10/site-packages/diskimage_builder/elements/fedora-container/containerfiles/36 /opt/dib/cache/containerfile
2022-07-06 00:16:37.708 | STEP 1/2: FROM docker.io/library/fedora:36
2022-07-06 00:16:37.710 | Trying to pull docker.io/library/fedora:36...
2022-07-06 00:16:38.305 | Getting image source signatures
2022-07-06 00:16:38.306 | Copying blob sha256:e1deda52ffad5c9c8e3b7151625b679af50d6459630f4bf0fbf49e161dba4e88
2022-07-06 00:16:38.430 | Copying blob sha256:e1deda52ffad5c9c8e3b7151625b679af50d6459630f4bf0fbf49e161dba4e88
2022-07-06 00:16:42.150 | Copying config sha256:98ffdbffd20736862c8955419ef7db69849d715131717697007c3e51f22915a5
2022-07-06 00:16:42.153 | Writing manifest to image destination
2022-07-06 00:16:42.153 | Storing signatures
2022-07-06 00:16:42.186 | STEP 2/2: RUN dnf install -y findutils util-linux
2022-07-06 00:16:42.397 | error running container: error from /usr/bin/crun creating container for [/bin/sh -c dnf install -y findutils util-linux]: writing file `/sys/fs/cgroup/buildah-buildah444253232/cgroup.procs`: Operation not supported
2022-07-06 00:16:42.397 | : exit status 1
2022-07-06 00:16:42.407 | Error: error building at STEP "RUN dnf install -y findutils util-linux": error while running runtime: exit status 1

The Jammy host runs kernel 5.15.0-40-generic. Both hosts run Docker version 20.10.17; i.e. as far as I can tell, the only difference here is the host distribution.

#12559 feels similar?

[1] https://zuul.opendev.org/t/openstack/build/ffce49bb9ee04d3aa66d852792e4d747/logs
[2] https://zuul.opendev.org/t/openstack/build/53e3e8a9468b471896ec5be0718e4f02

@giuseppe
Member

the issue could be that it started using cgroup v2.

I think this is a dup of #12559. You need to make sure Podman is not running in the root cgroup
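
A quick way to check both points from inside the container (a sketch; the outputs shown are what a cgroup v2 unified mount would produce):

stat -fc %T /sys/fs/cgroup   # "cgroup2fs" indicates the unified cgroup v2 hierarchy
cat /proc/self/cgroup        # a single "0::/" line means the process is in the root cgroup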

@ianw
Contributor Author

ianw commented Jul 11, 2022

@giuseppe sorry for being dense, this is my first rodeo with this particular bit of cgroup v2 -- is that what the commands at #12559 (comment) do?

@giuseppe
Member

Thanks. Could you please show me what your cgroup configuration looks like now inside the container?

@ianw
Contributor Author

ianw commented Jul 11, 2022

We don't do any explicit setup of cgroups inside the container; it simply calls "podman" (behind a lot of substitution variables [1]).

I can get the CI to report arbitrary things before it tries this; e.g. I'm running https://review.opendev.org/c/openstack/diskimage-builder/+/849274/2/diskimage_builder/elements/containerfile/root.d/08-containerfile now. I'll post results when I get them. I'll also try to set up an ad-hoc manual replication environment tomorrow, which will make things a bit faster.

Another thing I could try is running the nested podman as root; this is a privileged container. Not sure if that is a bug or a feature? :) [2]

[1] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/containerfile/root.d/08-containerfile#L91
[2] Update: this didn't help; https://zuul.opendev.org/t/openstack/build/b6f1e88817204d12bbb235bb4f6f5eb3/logs

@ianw
Contributor Author

ianw commented Jul 11, 2022

Hopefully the following gives some clues (from inside the container that wants to run podman, on a failing Jammy-based CI node):

2022-07-11 12:20:10.889 | + ls -l /sys/fs/cgroup/
2022-07-11 12:20:10.891 | total 0
2022-07-11 12:20:10.891 | -r--r--r-- 1 root root 0 Jul 11 12:20 cgroup.controllers
2022-07-11 12:20:10.891 | -r--r--r-- 1 root root 0 Jul 11 12:20 cgroup.events
2022-07-11 12:20:10.891 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.freeze
2022-07-11 12:20:10.891 | --w------- 1 root root 0 Jul 11 12:20 cgroup.kill
2022-07-11 12:20:10.891 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.max.depth
2022-07-11 12:20:10.891 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.max.descendants
2022-07-11 12:20:10.891 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.procs
2022-07-11 12:20:10.891 | -r--r--r-- 1 root root 0 Jul 11 12:20 cgroup.stat
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.subtree_control
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.threads
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cgroup.type
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.idle
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.max
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.max.burst
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.pressure
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 cpu.stat
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.uclamp.max
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.uclamp.min
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.weight
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpu.weight.nice
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpuset.cpus
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 cpuset.cpus.effective
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpuset.cpus.partition
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 cpuset.mems
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 cpuset.mems.effective
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.current
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.events
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.events.local
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.rsvd.current
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 hugetlb.1GB.rsvd.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.current
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.events
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.events.local
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.rsvd.current
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 hugetlb.2MB.rsvd.max
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 io.max
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 io.pressure
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 io.prio.class
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 io.stat
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 io.weight
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.current
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.events
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.events.local
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.high
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.low
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.max
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.min
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.numa_stat
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.oom.group
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.pressure
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.stat
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.swap.current
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 memory.swap.events
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.swap.high
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 memory.swap.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 misc.current
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 misc.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 pids.current
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 pids.events
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 pids.max
2022-07-11 12:20:10.892 | -r--r--r-- 1 root root 0 Jul 11 12:20 rdma.current
2022-07-11 12:20:10.892 | -rw-r--r-- 1 root root 0 Jul 11 12:20 rdma.max
2022-07-11 12:20:10.893 | + cat /sys/fs/cgroup/cgroup.procs
2022-07-11 12:20:10.894 | 1
2022-07-11 12:20:10.894 | 7
2022-07-11 12:20:10.894 | 8
2022-07-11 12:20:10.894 | 23
2022-07-11 12:20:10.894 | 33
2022-07-11 12:20:10.894 | 34
2022-07-11 12:20:10.894 | 628
2022-07-11 12:20:10.894 | 639
2022-07-11 12:20:10.894 | 678
2022-07-11 12:20:10.894 | 685
2022-07-11 12:20:10.894 | + mount
2022-07-11 12:20:10.903 | overlay on / type overlay ... < a ton of overlay mounts >
2022-07-11 12:20:10.903 | proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
2022-07-11 12:20:10.903 | tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64)
2022-07-11 12:20:10.903 | devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
2022-07-11 12:20:10.903 | sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
2022-07-11 12:20:10.903 | cgroup on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
2022-07-11 12:20:10.903 | mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
2022-07-11 12:20:10.903 | shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k,inode64)
2022-07-11 12:20:10.903 | /dev/xvda1 on /etc/openstack type ext4 (ro,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /home/zuul type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /opt/dib type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /etc/nodepool type ext4 (ro,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /etc/hosts type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /etc/resolv.conf type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /etc/hostname type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /var/lib/containers type ext4 (rw,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /opt/zookeeper/ca type ext4 (ro,relatime)
2022-07-11 12:20:10.903 | /dev/xvda1 on /opt/stack/data type ext4 (ro,relatime)
2022-07-11 12:20:10.903 | tmpfs on /opt/stack/data/etcd type tmpfs (rw,nosuid,nodev,relatime,size=524288k,inode64)
2022-07-11 12:20:10.903 | /dev/xvda1 on /var/log/nodepool type ext4 (rw,relatime)

@giuseppe
Member

I don't see any sub-cgroups, which means all processes are running in the root cgroup.

With such a setup, you first need to move the processes to a new sub-cgroup. Have you tried the fix suggested here:
#12559 (comment)?

@ianw
Contributor Author

ianw commented Jul 12, 2022

@giuseppe hrm, I guess I got the same thing as the reporter in #12559 (comment)

2022-07-12 01:45:29.967 | + cat /sys/fs/cgroup/cgroup.procs
2022-07-12 01:45:29.968 | 1
2022-07-12 01:45:29.968 | 7
2022-07-12 01:45:29.968 | 8
2022-07-12 01:45:29.968 | 23
2022-07-12 01:45:29.968 | 33
2022-07-12 01:45:29.968 | 34
2022-07-12 01:45:29.968 | 628
2022-07-12 01:45:29.968 | 639
2022-07-12 01:45:29.968 | 678
2022-07-12 01:45:29.968 | 685
2022-07-12 01:45:29.968 | + sudo mkdir /sys/fs/cgroup/init
2022-07-12 01:45:29.986 | + sudo cat /sys/fs/cgroup/cgroup.procs
2022-07-12 01:45:29.986 | + sudo tee /sys/fs/cgroup/init/cgroup.procs
2022-07-12 01:45:30.000 | 1
2022-07-12 01:45:30.000 | 7
2022-07-12 01:45:30.000 | 8
2022-07-12 01:45:30.000 | 23
2022-07-12 01:45:30.000 | 33
2022-07-12 01:45:30.000 | 34
2022-07-12 01:45:30.000 | 628
2022-07-12 01:45:30.000 | 639
2022-07-12 01:45:30.000 | 678
2022-07-12 01:45:30.000 | 688
2022-07-12 01:45:30.000 | 689
2022-07-12 01:45:30.000 | 690
2022-07-12 01:45:30.000 | 691
2022-07-12 01:45:30.000 | tee: /sys/fs/cgroup/init/cgroup.procs: Invalid argument

This is happening inside the container. Also, I tested running the nested podman as root with sudo and that gave the same error...

@giuseppe
Member

after you run that command, is the content of /sys/fs/cgroup/cgroup.procs left unchanged?

I'll see if I can fix it in Podman and perhaps let it automatically create a sub-cgroup when running in a container, but before that, I'd need to know more about the entrypoint for your container. Do you run Podman directly, or something else?

@giuseppe
Member

opened a PR: #14904

@ianw
Contributor Author

ianw commented Jul 12, 2022

opened a PR: #14904

Thank you; I'll need a little time but should be able to test this.

@ianw
Contributor Author

ianw commented Jul 12, 2022

after you run that command, is the content of /sys/fs/cgroup/cgroup.procs left unchanged?

Unfortunately I didn't capture the output after; I can if this is still relevant

I'll see if I can fix it in Podman and perhaps let it automatically create a sub-cgroup when running in a container, but before that, I'd need to know more about the entrypoint for your container. Do you run Podman directly, or something else?

For reference: this runs forked from a daemon process. Actually, the Python daemon forks the diskimage-builder program, which then runs a shell script, which then calls podman as one of its steps. So, no, it doesn't directly run podman :)

@giuseppe
Member

For reference: this runs forked from a daemon process. Actually, the Python daemon forks the diskimage-builder program, which then runs a shell script, which then calls podman as one of its steps. So, no, it doesn't directly run podman :)

so I am afraid my PR won't be enough. You need to make sure these programs run in a separate sub-cgroup.

Have you considered running systemd in the container?
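
For illustration, one commonly cited pattern for running systemd as the container's init under Docker looks roughly like this (a sketch only; the image name is a placeholder, the image must have systemd installed, and the exact flags needed vary with the Docker version and host cgroup layout):

docker run -d --name nodepool-builder \
    --privileged \
    --cgroupns=host \
    -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
    my-debian-podman-image /sbin/init

With systemd as PID 1, services (and anything that eventually calls podman) land in their own slices/sub-cgroups instead of the root cgroup.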

@ianw
Contributor Author

ianw commented Jul 12, 2022

Have you considered running systemd in the container?

Well, it has never come up before :) Would I be on the right path if, in the forked shell script that starts podman, I made a cgroup and ran podman with cgexec?

@giuseppe
Member

you need to run the container entrypoint itself in a new cgroup. There should not be any process left running in the root cgroup
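
A minimal sketch of what that could look like as a wrapper entrypoint, assuming a cgroup v2 unified mount at /sys/fs/cgroup and a container running as root (the script name and the "init" sub-cgroup are hypothetical):

#!/bin/sh
# entrypoint.sh: move this shell out of the root cgroup before starting
# the real workload, so everything it starts (including podman) inherits
# the sub-cgroup.
set -e
if [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
    mkdir -p /sys/fs/cgroup/init
    echo $$ > /sys/fs/cgroup/init/cgroup.procs   # one PID per write
fi
exec "$@"

Because the wrapper moves itself before exec-ing the real command, no process is left behind in the root cgroup.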

openstack-mirroring pushed a commit to openstack/diskimage-builder that referenced this issue Jul 12, 2022
This is a squash of two changes that have unfortunately simultaneously
broken the gate.

The functests are failing with

 sha256sum: bionic-server-cloudimg-amd64.squashfs.manifest: No such file or directory

I think what has happened here is that the SHA256 sums file being used
has got a new entry "bionic-server-cloudimg-amd64.squashfs.manifest"
which is showing up in a grep for
"bionic-server-cloudimg-amd64.squashfs".  sha256 then tries to also
check this hash, and has started failing.

To avoid this, add an EOL marker to the grep so it only matches the
exact filename.

Change I7fb585bc5ccc52803eea107e76dddf5e9fde8646 updated the
containerfile tests to Jammy and it seems that cgroups v2 prevents
podman running inside docker [1].  While we investigate, move this
testing back to focal.

[1] containers/podman#14884
Change-Id: I1af9f5599168aadc1e7fcdfae281935e6211a597
giuseppe added a commit to giuseppe/libpod that referenced this issue Jul 13, 2022
if podman is running in the root cgroup, it will create a new
subcgroup and move itself there.

[NO NEW TESTS NEEDED] it needs nested podman

Closes: containers#14884

Signed-off-by: Giuseppe Scrivano <[email protected]>
@ianw
Contributor Author

ianw commented Jul 15, 2022

@giuseppe I tried pulling master just to double check, but I still hit the same issue, so I think that giuseppe@e3419c0 doesn't fix this particular use case? So I wonder if this really is closed by that...

One thing from the prior comment #12559 (comment) is that cat <...procs...> | tee <... cgroup.procs ...> doesn't work, as you need to write the PIDs into the cgroup one at a time.

So what I came up with is https://review.opendev.org/c/zuul/nodepool/+/849273/4/Dockerfile

CMD sudo mkdir /sys/fs/cgroup/init && \
    for p in `cat /sys/fs/cgroup/cgroup.procs`; do echo $p | sudo tee /sys/fs/cgroup/init/cgroup.procs || true; done && \
    <run daemon>

This seems to work in our CI [1]. However, I have to convince myself and two project reviewers that this is not a terrible hack. From podman's perspective, is this pretty much what is required to ensure it can run under our daemon inside a Docker container? Are there any other suggestions? Thanks

[1] https://zuul.opendev.org/t/openstack/build/eb5d6f2b2fe9448f8e0ae8cce6b500c6

@giuseppe
Member

not sure if terrible, but it is still a hack :)

If you need such a complex configuration inside a container, maybe you should consider running systemd

mgagne pushed a commit to mgagne/diskimage-builder that referenced this issue Jul 19, 2022
(same commit message as the openstack/diskimage-builder change above)

(cherry picked from commit 78d3895)
mheon pushed a commit to mheon/libpod that referenced this issue Jul 26, 2022
(same commit message as the giuseppe/libpod change above)
@sonman

sonman commented Nov 7, 2022

(quoting ianw's comment above in full)

You may add --cgroupns=host to your Docker configuration on Jammy.
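
For example (a sketch; the image name and mounts are placeholders), with the host cgroup namespace the container's processes sit in the Docker-created cgroup rather than in what looks like the root of the hierarchy, so podman/crun can create sub-cgroups normally:

docker run -d --privileged \
    --cgroupns=host \
    -v /var/lib/containers:/var/lib/containers \
    my-nodepool-builder-image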

westphahl pushed a commit to westphahl/nodepool that referenced this issue May 26, 2023
Per the comments in

 containers/podman#14884

there is basically no way to run podman nested in the container in a
cgroups v2 environment (e.g. Ubuntu Jammy) with the processes in the
same context the container starts in.

One option is to run systemd in the container, which puts things in
separate slices, etc.  This is unappealing.

This takes what I think is the simplest approach which is to check if
we're under cgroups v2 and move everything into a new group before
nodepool-builder starts.

The referenced change tests this by running the containerfile elements
on Jammy.

 Needed-By: https://review.opendev.org/c/openstack/diskimage-builder/+/849274

Change-Id: Ie663d01d77e17f560a92887cba1e2c86b421b24d
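
Roughly, the approach that commit describes could be a pre-start snippet like the following (a sketch, not the actual nodepool change; the "init" sub-cgroup name mirrors the hack above):

# only needed on a cgroup v2 (unified) mount
if [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
    sudo mkdir -p /sys/fs/cgroup/init
    # cgroup.procs accepts only one PID per write, so move them one at a time
    for p in $(cat /sys/fs/cgroup/cgroup.procs); do
        echo "$p" | sudo tee /sys/fs/cgroup/init/cgroup.procs || true
    done
fi
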
@github-actions bot added the "locked - please file new issue/PR" label Sep 11, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 11, 2023