Rootless: Podman can't start systemd container on Ubuntu 20.04 #8545
Comments
Did this work on previous Podman versions? Can you include the output of |
And finally, does it work if you use |
I haven't tested this yet. Will try tomorrow.
Yes:
|
If you stop the container and recreate it without --systemd=always, does it work then?
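A sketch of that suggestion (the container name is a placeholder; the image is the one from the reporter's `podman inspect` output below):

```console
$ podman stop <container> && podman rm <container>
$ podman run -it geerlingguy/docker-fedora32-ansible:latest
```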
This is working fine for me on F33 with podman 2.2.
I would like the output of |
Yes, it works, but not reproducibly. It seems to fail randomly afterwards, at least.
I used a Fedora image now, in a clean Ubuntu 20.04 VM.
```json
[
{
"Id": "12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f",
"Created": "2020-12-02T18:43:15.570984377+01:00",
"Path": "/usr/sbin/init",
"Args": [
"/usr/sbin/init"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 255,
"Error": "",
"StartedAt": "2020-12-02T18:43:22.305560602+01:00",
"FinishedAt": "2020-12-02T18:43:22.331503527+01:00",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "2ecd1985d086340147546c403c3f2771c52ac872de5916378f3e1c8a4f01d353",
"ImageName": "docker.io/geerlingguy/docker-fedora32-ansible:latest",
"Rootfs": "",
"Pod": "",
"ResolvConfPath": "/run/user/1000/containers/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/hostname",
"HostsPath": "/run/user/1000/containers/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/hosts",
"StaticDir": "/home/ubuntu/.local/share/containers/storage/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata",
"OCIConfigPath": "/home/ubuntu/.local/share/containers/storage/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/config.json",
"OCIRuntime": "runc",
"LogPath": "/home/ubuntu/.local/share/containers/storage/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/ctr.log",
"LogTag": "",
"ConmonPidFile": "/run/user/1000/containers/vfs-containers/12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f/userdata/conmon.pid",
"Name": "strange_northcutt",
"RestartCount": 0,
"Driver": "vfs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "vfs",
"Data": null
},
"Mounts": [
{
"Type": "volume",
"Name": "6bd62a96ccada6074290d1086455bb5232cc88ef2b61e93ddf850ae44fa97631",
"Source": "/home/ubuntu/.local/share/containers/storage/volumes/6bd62a96ccada6074290d1086455bb5232cc88ef2b61e93ddf850ae44fa97631/_data",
"Destination": "/run",
"Driver": "local",
"Mode": "",
"Options": [
"nodev",
"exec",
"nosuid",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "bea2706affbf030d6c730cfbfe7dbec10eae3a0940ce323d2f5ea48812b05f36",
"Source": "/home/ubuntu/.local/share/containers/storage/volumes/bea2706affbf030d6c730cfbfe7dbec10eae3a0940ce323d2f5ea48812b05f36/_data",
"Destination": "/sys/fs/cgroup",
"Driver": "local",
"Mode": "",
"Options": [
"nodev",
"exec",
"nosuid",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "a6c9819aa7151a6defde91f1073e91f36325582748863b5f423353b19b414fc0",
"Source": "/home/ubuntu/.local/share/containers/storage/volumes/a6c9819aa7151a6defde91f1073e91f36325582748863b5f423353b19b414fc0/_data",
"Destination": "/tmp",
"Driver": "local",
"Mode": "",
"Options": [
"nodev",
"exec",
"nosuid",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
}
],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": ""
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/home/ubuntu/.local/share/containers/storage",
"--runroot",
"/run/user/1000/containers",
"--log-level",
"error",
"--cgroup-manager",
"cgroupfs",
"--tmpdir",
"/run/user/1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"vfs",
"--events-backend",
"journald",
"container",
"cleanup",
"12e5de73717c3cfc33835d62ca0c8569eedd805c80e069a25c5e4a5312bee96f"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "12e5de73717c",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"container=docker",
"pip_packages=ansible",
"DISTTAG=f32container",
"FGC=f32",
"FBR=f32",
"HOSTNAME=12e5de73717c",
"HOME=/root"
],
"Cmd": [
"/usr/sbin/init"
],
"Image": "docker.io/geerlingguy/docker-fedora32-ansible:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": "",
"OnBuild": null,
"Labels": {
"maintainer": "Jeff Geerling"
},
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.Created": "2020-12-02T18:43:15.570984377+01:00",
"io.kubernetes.cri-o.TTY": "true",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"org.opencontainers.image.stopSignal": "37"
},
"StopSignal": 37,
"CreateCommand": [
"podman",
"run",
"-it",
"geerlingguy/docker-fedora32-ansible:latest"
],
"SystemdMode": true,
"Umask": "0022"
},
"HostConfig": {
"Binds": [
"6bd62a96ccada6074290d1086455bb5232cc88ef2b61e93ddf850ae44fa97631:/run:rprivate,rw,nodev,exec,nosuid,rbind",
"bea2706affbf030d6c730cfbfe7dbec10eae3a0940ce323d2f5ea48812b05f36:/sys/fs/cgroup:rprivate,rw,nodev,exec,nosuid,rbind",
"a6c9819aa7151a6defde91f1073e91f36325582748863b5f423353b19b414fc0:/tmp:rprivate,rw,nodev,exec,nosuid,rbind"
],
"CgroupManager": "cgroupfs",
"CgroupMode": "host",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null
},
"NetworkMode": "slirp4netns",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "private",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"CgroupConf": null
}
}
]
```
Systemd mode is enabled, so this has to be the cgroups v1 version of our logic for mounting |
It will be so nice when cgroups v1 disappears.
A friendly reminder that this issue had no activity for 30 days.
@giuseppe Did your patch on merging /sys/fs/cgroup fix this issue?
I don't think that patch could help here. Does it make a difference if you wrap the command with
|
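The elided wrapper is presumably `systemd-run`, which a later comment in this thread tries explicitly; a sketch of the suggested invocation (an assumption, not a confirmed quote):

```console
$ systemd-run --user --scope podman run -it geerlingguy/docker-fedora32-ansible:latest
```

Running podman inside a transient user scope gives it a cgroup owned by the user, which can matter for rootless systemd containers on cgroups v1.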
Same error
|
@giuseppe is this us setting up cgroups incorrectly in rootless mode on cgroups V1?
A friendly reminder that this issue had no activity for 30 days.
@giuseppe Ping again.
my first bet is that we don't detect the systemd mode. Could you try adding |
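The elided flag is presumably `--systemd=always`, mentioned earlier in the thread; it forces systemd mode instead of relying on entrypoint detection. A sketch (an assumption about the suggestion, not a quote):

```console
$ podman run -it --systemd=always geerlingguy/docker-fedora32-ansible:latest
```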
This is now with HWE kernel 5.8.0-44-generic. |
It might help to enable debug logging on systemd in the container - I believe that adding |
|
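The exact elided suggestion is not shown; one common way to get verbose output from systemd running as container PID 1 (offered as a hedged stand-in) is the `SYSTEMD_LOG_LEVEL` environment variable:

```console
$ podman run -it --systemd=always -e SYSTEMD_LOG_LEVEL=debug \
    geerlingguy/docker-fedora32-ansible:latest
```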
Sorry, I should have been clearer - we need |
It seems the entrypoint in the image is wrong; I had to set it manually and run with log-level=debug.
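A hedged reconstruction of that invocation, assuming `--log-level=debug` is passed to systemd itself (systemd accepts this option on its command line when running as init):

```console
$ podman run -it geerlingguy/docker-fedora32-ansible:latest \
    /usr/sbin/init --log-level=debug
```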
Sometimes the container starts successfully (probably every 4th try) and this is the systemd log from a successful start and stop: successful run
|
This has to be it. Can you check what's mounted on |
|
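The elided mount point is presumably `/sys/fs/cgroup` (an assumption based on the earlier discussion of the cgroups v1 mounting logic); one way to check from a running container:

```console
$ podman exec <container> sh -c 'mount | grep cgroup'
```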
I collapsed my issue and description as it is not related to the issue mentioned here, but it may help others in a similar situation. My issue was related to 18.04 (not 20.04) and is fixable by installing "libpam-cgfs".

Hi, in my case this was caused by `/sys/fs/cgroup/systemd/user.slice/user-1000.slice/session-X.scope` being owned by root instead of the non-root user I am running rootless from. I am not sure what caused this, but on one of my boxes it got resolved after numerous retries and restarts.

If I try to run systemd in a CentOS container, it does not require ownership of session-X.scope, but Ubuntu does 🤔

-- Edit 2 --
Installing libpam-cgfs did the trick.

-- Edit 3 -- -- Edit 4 --
could you try wrapping your command with
For a quick test you could run bash instead of podman and check the owner of the current cgroup.
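A sketch of that quick test (assuming the wrapper under discussion is `systemd-run`, per the reply below): run bash in a transient user scope and check who owns the cgroup it lands in.

```console
$ systemd-run --user --scope bash
# inside the new shell, on cgroups v1:
$ stat -c '%U %G' "/sys/fs/cgroup/systemd$(awk -F: '$2 ~ /systemd/ {print $3}' /proc/self/cgroup)"
```

If the owner is root rather than your user, systemd in the container will not be able to create its own sub-cgroups.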
My bad. My issue was actually related to 18.04, not 20.04. The 18.04 issue can be fixed by installing libpam-cgfs.
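For reference, the 18.04 fix mentioned above (the need to re-login is an assumption about how the PAM module takes effect):

```console
$ sudo apt install libpam-cgfs
# log out and back in so the PAM session sets up user-owned cgroups
```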
As requested, here is my attempt to run it with systemd-run:
I'm not sure what to do in bash - any specific command? `ls -la /sys/fs/cgroup/systemd/user.slice/user-1002.slice/`
Not sure if it helps.
A friendly reminder that this issue had no activity for 30 days.
Hi, I just managed to hit this issue again on Ubuntu 20.04. I used |
So we have a workaround.
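The elided workaround appears to involve `machinectl`, judging from the reply below; a sketch (the user name is hypothetical):

```console
$ sudo machinectl shell ubuntu@.host
$ podman run -it --systemd=always geerlingguy/docker-fedora32-ansible:latest
```

`machinectl shell` opens a fresh login session, so the session scope cgroup is set up by systemd-logind rather than inherited from an existing SSH session.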
Well @rhatdan, I am not sure whether this is the case. @c-goes said that libpam-cgfs did not work for him, so I am skeptical about machinectl. @c-goes, can you try whether machinectl shell fixes your issue? Also, double check whether | exists.
dbus shouldn't be a problem in my case, as I always log in as the user running podman (via SSH). Thus, I think I can't use machinectl, because I want to use the Ansible modules for Podman. I tried with libpam-cgfs installed and machinectl from root, logging into my user, but this didn't help.
|
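Since dbus came up, the thing to double-check may be the user's session bus socket (this is an assumption; the elided item above is not shown):

```console
$ ls -l ${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/bus
```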
I think this problem does not occur with every image. Is there a specific Fedora image with systemd I could test? The geerlingguy-systemd images are only used with docker-privileged and aren't tested with podman. With a Debian image from geerlingguy it works reproducibly. This works:
Without systemd-run it also works reproducibly:
Using a Fedora image, also from geerlingguy, it only works every 4th try.
I'm not sure if this issue is related to the image or to podman.
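A hedged reconstruction of the comparison described above (the exact commands are elided in the thread; the Debian tag is assumed from geerlingguy's image naming scheme):

```console
# Debian image: works reproducibly, with or without systemd-run
$ systemd-run --user --scope podman run -it geerlingguy/docker-debian10-ansible:latest
$ podman run -it geerlingguy/docker-debian10-ansible:latest

# Fedora image: succeeds only about every 4th try
$ podman run -it geerlingguy/docker-fedora32-ansible:latest
```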
I also had an issue running rootless systemd within a systemd service; libpam-cgfs did not help with this. Btw, I am running Ubuntu only, so I am not sure whether this helps for other scenarios.
I'm also running into this, but with a different error:

```console
$ podman run -ti --systemd=always docker.io/geerlingguy/docker-ubuntu2004-ansible /lib/systemd/systemd
systemd 245.4-4ubuntu3.6 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization podman.
Detected architecture x86-64.

Welcome to Ubuntu 20.04.2 LTS!

Set hostname to <d5b408eef98c>.
Failed to create /user.slice/user-1000.slice/session-17521.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
```

Running as root with |
Hi @niclashoyer , |
Does |
No, it did not in my case. How are you connected, btw? Is it SSH? 🤔
yes SSH |
Alright, I checked with the image you use and I am able to replicate @c-goes's issue. Approximately 1 in 4 starts succeeds.
@niclashoyer I checked the image you run, and the issue is in the volume binding. It tries to mount directories required by systemd - see here. In the case of podman this is handled automatically, and somehow the two configurations clash. When I removed the mentioned line from the Dockerfile, I managed to run the container without issues. So for now I recommend that you build your own image, @niclashoyer. @c-goes, can you confirm/deny whether the images you tried mount cgroup, tmp, and run the same way? @giuseppe @rhatdan, what do you think? Is this solvable on the podman side in any way?
If the image has volumes on those directories, then Podman should not mount its own volumes there. So removing these volumes makes the most sense.
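A quick way to check whether an image declares such volumes (a sketch; the image name is taken from the comment above):

```console
$ podman image inspect docker.io/geerlingguy/docker-ubuntu2004-ansible \
    --format '{{json .Config.Volumes}}'
```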
Also, I don't believe systemd would work with a non-cgroup filesystem mounted at /sys/fs/cgroup, and I am not sure it would be happy with /run not being a tmpfs.
The |
I am going to close this issue, since the problem seems to be with the image. |
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Podman isn't able to start a container with systemd in rootless mode.
Steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
Normal systemd output with green text. No Cgroup errors.
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:
Output of `podman info --debug`:
Package info (e.g. output of `rpm -q podman` or `apt list podman`):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
yes
Additional environment details (AWS, VirtualBox, physical, etc.):
physical