Named volume incorrectly mapped with any userns option but "" #23347

Comments
I can't replicate on my system: UID 1000 for both volumes, which is what I would expect. Is there something unusual about your environment, or your Podman package? (I note you're using a …)
Not really. I installed the latest version from a COPR for a quick retest, but I originally stumbled into the issue (and downgraded after reporting) on a vanilla Fedora 40 installation from the repos (upgraded from a previous Fedora 40 installation). One thing I forgot to mention is that the problem is gone if I don't create (nor declare a `VOLUME` for) the mount point during build. Is there anything you would like me to try? I could try to replicate at home using a fresh installation of Fedora 41 (or even a bootable image).
I checked in a fresh Fedora 40 installation and the problem persists. If I mount another volume whose mountpoint was not created during build, it works as expected.

```console
$ podman volume rm test-volume test-volume2 && podman run --rm -it --userns=keep-id -v test-volume:/home/test-user/ -v test-volume2:/home/test-user/test-dir2 test-image ls -ln /home/test-user
drwxr-sr-x 2  999  999 4096 Jul 19 18:31 test-dir
drwxr-sr-x 2 1000 1000 4096 Jul 19 18:31 test-dir2
```
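A hedged diagnostic sketch that may help narrow this down (`test-volume` is the volume name from the commands above; the mountpoint path differs per system): check what ownership is actually stored on disk, viewed from inside the rootless user namespace, and compare it with what the container sees.

```console
# Where Podman keeps the named volume's data on the host
$ podman volume inspect test-volume --format '{{ .Mountpoint }}'

# Ownership as stored on disk, listed from inside the rootless user
# namespace (the same ID mapping the build used)
$ podman unshare ls -lnd "$(podman volume inspect test-volume --format '{{ .Mountpoint }}')"
```

Comparing that with the `ls -ln` output inside the container can help distinguish whether the unexpected owner is actually written to disk or only appears through the running container's mapping.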
What is the UID of the user running Podman?
UID=GID=1000, the only non-root, non-service user on the system.
A friendly reminder that this issue had no activity for 30 days.
Just stumbled across this, same thing happening to me. Trying to run via …
Which one are you using? If podman-compose, please open the issue there. If you can get this issue to happen with standard podman or podman-remote, that would make it much easier for us to diagnose.
So I'm using …
Here's an example that's just pure podman:

```dockerfile
FROM debian:12-slim
ARG DEFAULT_UID=1000
ARG DEFAULT_GID=1000
ENV DEFAULT_UID $DEFAULT_UID
ENV DEFAULT_GID $DEFAULT_GID
ENV PUSER "phteven"
ENV PGROUP "phteven"
ENV DEBIAN_FRONTEND noninteractive
ENV TERM xterm
USER root
ENV VOLUME_DIR "/myvol"
RUN apt-get -q update && \
apt-get -y -q --no-install-recommends upgrade && \
apt-get install --no-install-recommends -y -q tini && \
groupadd --gid ${DEFAULT_GID} ${PGROUP} && \
useradd -m --uid ${DEFAULT_UID} --gid ${DEFAULT_GID} ${PUSER} && \
usermod -a -G tty ${PUSER} && \
mkdir -p "${VOLUME_DIR}" && \
chown -R ${PUSER}:${PGROUP} "${VOLUME_DIR}" && \
chmod 750 "${VOLUME_DIR}" && \
apt-get -y -q --allow-downgrades --allow-remove-essential --allow-change-held-packages autoremove && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/*
VOLUME ["$VOLUME_DIR"]
ENTRYPOINT ["/usr/bin/tini"]
CMD ["/usr/bin/sleep", "infinity"] $ podman build -t=podvol .
...
Successfully tagged localhost/podvol:latest
$ podman run --detach --rm --userns keep-id --name podvol podvol
ab74fbb8ac0d85b888675b15b78227e195f60a6a1d6151bcbeca5286aeb0dac7
$ podman exec -i -t podvol bash
root@ab74fbb8ac0d:/# ls -l / | grep myvol
drwxr-x--- 2 999 999 6 Sep 12 20:20 myvol
root@ab74fbb8ac0d:/# id 1000
uid=1000(phteven) gid=1000(phteven) groups=1000(phteven),5(tty)
root@ab74fbb8ac0d:/# id 0
uid=0(root) gid=0(root) groups=0(root)
root@ab74fbb8ac0d:/# id 999
id: '999': no such user
```

Now, if I do the exact same thing only take out the `--userns keep-id`:

```console
$ podman exec -i -t podvol bash
root@9a24faf00890:/# ls -l / | grep myvol
drwxr-x--- 1 phteven phteven 6 Sep 12 20:20 myvol
```

See how in the first case `myvol` is owned by 999 999 (an ID that maps to no user in the container), while in the second it is owned by `phteven`?

My podman info:
The container inspect output:

```json
[
{
"Id": "9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825",
"Created": "2024-09-12T14:28:03.340036788-06:00",
"Path": "/usr/bin/tini",
"Args": [
"/usr/bin/sleep",
"infinity"
],
"State": {
"OciVersion": "1.2.0",
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1154072,
"ConmonPid": 1154068,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-12T14:28:04.955111588-06:00",
"FinishedAt": "0001-01-01T00:00:00Z",
"CgroupPath": "/user.slice/user-1000.slice/[email protected]/user.slice/libpod-9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825.scope",
"CheckpointedAt": "0001-01-01T00:00:00Z",
"RestoredAt": "0001-01-01T00:00:00Z"
},
"Image": "a0c7c5da2e498931804d7f3cbed370199bc660eb6d24d32f7422f581116fd611",
"ImageDigest": "sha256:065bf7d411fe913f00eadfd2f445d57b309640165803bc2fccbdcbbce0b6adf5",
"ImageName": "localhost/podvol:latest",
"Rootfs": "",
"Pod": "",
"ResolvConfPath": "/run/user/1000/containers/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/hostname",
"HostsPath": "/run/user/1000/containers/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/hosts",
"StaticDir": "/home/user/.local/share/containers/storage/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata",
"OCIConfigPath": "/home/user/.local/share/containers/storage/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/config.json",
"OCIRuntime": "crun",
"ConmonPidFile": "/run/user/1000/containers/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/conmon.pid",
"PidFile": "/run/user/1000/containers/overlay-containers/9a24faf008906487c24b43be9f888535be10fc965c1205ffb6a30e9052e30825/userdata/pidfile",
"Name": "podvol",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [
"65db4c33363574f7adefddce265b10b85fbfaf17c915848cb0a3c9ba7023924f"
],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/home/user/.local/share/containers/storage/overlay/574e22e973fe0b3eb7acf1af8a1b1e57c708b8754d00b2ab50ecfe3cb818aa10/diff:/home/user/.local/share/containers/storage/overlay/e42396fe03bead4cf365f2d1cc8c4c53b21f21da99d2168eb93601896fabe080/diff:/home/user/.local/share/containers/storage/overlay/9853575bc4f955c5892dd64187538a6cd02dba6968eba9201854876a7a257034/diff",
"MergedDir": "/home/user/.local/share/containers/storage/overlay/e86b00bda00dc66db8e4980691d297cbad1e02a140b7db1f6df8fc205b894c8d/merged",
"UpperDir": "/home/user/.local/share/containers/storage/overlay/e86b00bda00dc66db8e4980691d297cbad1e02a140b7db1f6df8fc205b894c8d/diff",
"WorkDir": "/home/user/.local/share/containers/storage/overlay/e86b00bda00dc66db8e4980691d297cbad1e02a140b7db1f6df8fc205b894c8d/work"
}
},
"Mounts": [],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/run/user/1000/netns/netns-a6a9c06e-ef43-75dd-5c8d-20a6c41db7cb"
},
"Namespace": "",
"IsInfra": false,
"IsService": false,
"KubeExitCodePropagation": "invalid",
"lockNumber": 104,
"Config": {
"Hostname": "9a24faf00890",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=podman",
"PGROUP=phteven",
"DEBIAN_FRONTEND=noninteractive",
"VOLUME_DIR=/myvol",
"DEFAULT_UID=1000",
"PUSER=phteven",
"DEFAULT_GID=1000",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"HOME=/root",
"HOSTNAME=9a24faf00890"
],
"Cmd": [
"/usr/bin/sleep",
"infinity"
],
"Image": "localhost/podvol:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/bin/tini"
],
"OnBuild": null,
"Labels": {
"io.buildah.version": "1.37.2"
},
"Annotations": {
"io.container.manager": "libpod",
"io.podman.annotations.autoremove": "TRUE",
"org.opencontainers.image.stopSignal": "15",
"org.systemd.property.KillSignal": "15",
"org.systemd.property.TimeoutStopUSec": "uint64 10000000"
},
"StopSignal": "SIGTERM",
"HealthcheckOnFailureAction": "none",
"CreateCommand": [
"podman",
"run",
"--detach",
"--rm",
"--userns",
"keep-id",
"--name",
"podvol",
"podvol"
],
"Umask": "0022",
"Timeout": 0,
"StopTimeout": 10,
"Passwd": true,
"sdNotifyMode": "container"
},
"HostConfig": {
"Binds": [],
"CgroupManager": "systemd",
"CgroupMode": "private",
"ContainerIDFile": "",
"LogConfig": {
"Type": "journald",
"Config": null,
"Path": "",
"Tag": "",
"Size": "0B"
},
"NetworkMode": "pasta",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": true,
"Annotations": {
"io.container.manager": "libpod",
"io.podman.annotations.autoremove": "TRUE",
"org.opencontainers.image.stopSignal": "15",
"org.systemd.property.KillSignal": "15",
"org.systemd.property.TimeoutStopUSec": "uint64 10000000"
},
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "shareable",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "private",
"IDMappings": {
"UidMap": [
"0:1:1000",
"1000:0:1",
"1001:1001:64536"
],
"GidMap": [
"0:1:1000",
"1000:0:1",
"1001:1001:64536"
]
},
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "user.slice",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 2048,
"Ulimits": [
{
"Name": "RLIMIT_MEMLOCK",
"Soft": 9223372036854775807,
"Hard": 9223372036854775807
},
{
"Name": "RLIMIT_NOFILE",
"Soft": 65535,
"Hard": 65535
},
{
"Name": "RLIMIT_NPROC",
"Soft": 262143,
"Hard": 524287
}
],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"CgroupConf": null
}
}
]
```

Is that enough detail?
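As an aside, one possible reading of the `IDMappings` block in that output (a hedged interpretation; treating the middle column as an intermediate, rootless-namespace ID is my assumption and is not stated anywhere in this thread):

```sh
# UidMap/GidMap entries: container_id:intermediate_id:count
#   0:1:1000        -> container IDs 0-999  = intermediate IDs 1-1000
#   1000:0:1        -> container ID  1000   = intermediate ID  0 (the rootless user)
#   1001:1001:64536 -> container IDs 1001+  = intermediate IDs 1001+
#
# The build ran without keep-id, so a directory chowned to UID 1000 there is
# stored as intermediate UID 1000; under the map above, intermediate 1000
# appears inside a keep-id container as UID 999, which would match the
# 999 999 owner shown for /myvol.
```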
@giuseppe PTAL
@mmguero The thing leading to the unexpected behavior is using any kind of `--userns` option.
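Until a fix is available, one workaround that may help (a hedged sketch, not the official solution: `podvol-data` is a hypothetical named volume, and `:U` simply re-chowns the volume contents rather than fixing the mapping) is the documented `:U` volume suffix, which asks Podman to chown the source to match the container user:

```console
$ podman run --detach --rm --userns keep-id --name podvol \
    -v podvol-data:/myvol:U \
    podvol
```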
Working on a fix in #23977.
Linked commits (the same fix, cherry-picked to several branches):

convert the owner UID and GID into the user namespace only when the ":idmap" mount option is used. This changes the behaviour of ":idmap" with an empty volume: now the existing directory ownership is copied up as in the other case.

Closes: containers#23347
Closes: https://issues.redhat.com/browse/RHEL-67842

Signed-off-by: Giuseppe Scrivano <[email protected]>
(cherry picked from commit 4323252)
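For context, the `:idmap` option mentioned in the commit message is requested per volume at run time. A hedged sketch based on the original reproducer (it requires the crun runtime and a kernel with idmapped-mounts support, and is not needed once the fix itself is in place):

```console
$ podman run --rm -it --userns=keep-id \
    -v test-volume:/home/test-user/test-dir:idmap \
    test-image ls -ln /home/test-user
```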
Issue Description

I am running rootless podman. If I run with any option but `""` for `userns`, then whenever I try to mount a volume on a rootless image at a mount point created during build, the ownership of the directory gets corrupted. In particular, I see UID=GID=999.

Steps to reproduce the issue

1. Write a `Dockerfile` that creates a non-root user and a directory inside his `$HOME`.
2. Build it: `$ podman build -t test-image .`
3. Run with any `userns` option but `""`, and get `uid=999,gid=999`:

```console
$ podman volume rm test-volume && podman run --rm -it --userns=keep-id -v test-volume:/home/test-user/test-dir test-image ls -ln /home/test-user
total 4
drwxr-sr-x 2 999 999 4096 Jul 19 18:31 test-dir
```
Describe the results you received

I get ownership 999 for the volume's UID.

Describe the results you expected

I would expect to get 1000 for the volume's UID.

podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
Additional information
I still get confused by namespace mapping, as you can probably tell, but I would not think this is the expected behavior; otherwise it would be impossible (or at least I cannot see how it would be done) to have both bind mounts and volume mounts on the same container in the case where the volume mount overrides something written during build.
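For comparison, a bind mount at the same build-created mountpoint should show up with the host user's UID under `--userns=keep-id`, which is the behavior the reporter expects from the named volume as well. A hedged sketch (the host directory `test-dir` is hypothetical):

```console
$ mkdir -p test-dir
$ podman run --rm -it --userns=keep-id \
    -v "$(pwd)/test-dir:/home/test-user/test-dir" \
    test-image ls -ln /home/test-user
```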