Running rootless Podman in Systemd unit with PrivateTmp=true fails #14106

Comments
I just tried with the default options from …

Thanks for reaching out, @runiq. Before going into the details, a warning: we can only support the exact units generated by `podman generate systemd`. Back to your issue: I cannot reproduce when only adding `PrivateTmp=true`. Can you retry with the upper example?
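A hedged sketch of what such a retest could look like in a rootless user session. This is not the exact unit from the comment (which is not shown in this capture); the container name and image are illustrative assumptions.

```sh
# Hedged sketch: create a container, generate a unit for it with
# `podman generate systemd`, add only PrivateTmp=true, and run it as a user
# unit. Container name and image are illustrative assumptions.
podman create --name mytest registry.fedoraproject.org/fedora:35 sleep 1h
mkdir -p ~/.config/systemd/user
podman generate systemd --name mytest > ~/.config/systemd/user/container-mytest.service
# manually add "PrivateTmp=true" to the [Service] section of the generated file
systemctl --user daemon-reload
systemctl --user start container-mytest.service
```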
Ah, I wasn't aware of that! I guess this issue can be closed then, as it only appears to happen with the above constellation (that is, a systemd system unit with `PrivateTmp=true` running Podman as a rootless user). I was about to file a few more issues regarding systemd hardening options in this unit file, so it's good to know I don't have to. Sorry for failing to find these issues; I'll have another look through the issue list.
Yes, that one works fine. I'll close this issue then.

Just a quick question: what's the perspective on supporting rootless Podman in systemd system units?
Here's the issue: #12778

To me it looks like a blocker on the systemd side of things; Podman cannot do much. Note that if you want to run the units as an ordinary user, you can do that with systemd user units.
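A hedged sketch of the user-unit alternative suggested above, assuming the unit file already exists as `foo.service`; the file name and paths are illustrative assumptions.

```sh
# Hedged sketch: run the unit as a systemd user unit instead of a system unit.
# The unit name is an illustrative assumption.
loginctl enable-linger "$USER"            # keep the user's systemd instance running without a login session
mkdir -p ~/.config/systemd/user
cp foo.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now foo.service
```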
Oh, right, I even commented in that one. I'll post my results in that issue then. (By the way, also a big 'thank you!' for your Podman series on heise.de. Just went through it, it's a great introductory read.)

Very kind of you, thanks a lot :)
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description

Running podman as a rootless user from a systemd system unit with `PrivateTmp=true` hoses the user's podman installation and requires `podman system reset`.

Steps to reproduce the issue:
1. Set up rootless podman for your user
2. `sudo systemctl start ./foo.service`, where `foo.service` is this (see the hedged sketch after these steps)
3. Start the unit
4. Restart the unit
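A hedged sketch of what `foo.service` might have looked like, based only on details stated elsewhere in this report (a system unit, run for a rootless user, with `PrivateTmp=true`); the user name, container name, and image are illustrative assumptions, not the reporter's actual values.

```ini
# Hedged sketch only; the original foo.service is not reproduced in this capture.
# Assumed names: rootless user "alice", container "mycontainer".
[Unit]
Description=Rootless Podman container started from a system unit

[Service]
User=alice
# The hardening option this report is about:
PrivateTmp=true
ExecStart=/usr/bin/podman run --rm --name mycontainer registry.fedoraproject.org/fedora:35 sleep infinity
ExecStop=/usr/bin/podman stop -t 10 mycontainer

[Install]
WantedBy=multi-user.target
```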
Describe the results you received:

After the unit runs, `podman images` from this account throws an error referencing `/var/tmp/<numbers>`, where `<numbers>` is different every time (I mean the numbers are different), i.e. on every invocation of `podman images`. The workarounds are to set `$TMPDIR` to a path outside of `/var/tmp` or `/tmp`, or to do `podman system reset --force`, which throws an error of its own in which the `<numbers>` are random in both lines.
Describe the results you expected:

Not having to set `TMPDIR` or run `podman system reset`.
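A hedged sketch of the `$TMPDIR` workaround mentioned above, for the affected rootless account; the directory path is an illustrative assumption, and the claim that this avoids the error comes from the report itself rather than from independent testing.

```sh
# Hedged sketch of the $TMPDIR workaround described in this report.
# The directory is an illustrative assumption; per the report, any path
# outside of /tmp and /var/tmp should do.
mkdir -p ~/.local/share/podman-tmp
export TMPDIR=~/.local/share/podman-tmp
podman images   # per the report, this avoids needing `podman system reset --force`
```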
Additional information you deem important (e.g. issue happens only occasionally):

The unit was originally generated with `podman generate systemd` and augmented with `cgroups=split`, `Delegate=true`, and `KillMode=mixed` from #6666 (see the snippet below).
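A hedged illustration of those three augmentations applied to a generated unit. Only `cgroups=split`, `Delegate=true`, and `KillMode=mixed` come from the report; the container name, image, and the exact `ExecStart` line are illustrative assumptions.

```ini
# Hedged sketch: the [Service] additions described above. Only the three
# settings named in the report are taken from it; the ExecStart line with
# its container name and image is an illustrative assumption.
[Service]
Delegate=true
KillMode=mixed
ExecStart=/usr/bin/podman run --rm --cgroups=split --name mycontainer registry.fedoraproject.org/fedora:35 sleep infinity
```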
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
I have checked the troubleshooting guide, but I have not tested with the latest podman.
Additional environment details (AWS, VirtualBox, physical, etc.):
Stock Fedora 35, up to date.