
named volume created when running with userns=keep-id has wrong ownership #5698

Closed
stintel opened this issue Apr 1, 2020 · 22 comments · Fixed by #6747
Assignees: vrothberg
Labels
Good First Issue: This issue would be a good issue for a first time contributor to undertake.
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
stale-issue

Comments

stintel commented Apr 1, 2020

/kind bug

Description

When running podman with userns=keep-id and a named volume, the volume ownership is wrong.

Steps to reproduce the issue:

  1. podman volume rm test

  2. podman run --rm -it --userns=keep-id --mount 'type=volume,src=test,dst=/home/user' --workdir /home/user ubuntu:18.04

  3. touch test

Describe the results you received:

1000@1da098d4a317:~$ touch test
touch: cannot touch 'test': Permission denied
1000@1da098d4a317:~$

Describe the results you expected:

1000@1da098d4a317:~$ touch test
1000@1da098d4a317:~$

Additional information you deem important (e.g. issue happens only occasionally):

When running with --userns=keep-id, the volume should be created with the real UID/GID of the user calling podman, so that it is writable inside the container.

$ podman volume inspect test
[
     {
          "Name": "test",
          "Driver": "local",
          "Mountpoint": "/home/stijn/.local/share/containers/storage/volumes/test/_data",
          "CreatedAt": "2020-04-01T21:57:54.956706206+03:00",
          "Labels": {

          },
          "Scope": "local",
          "Options": {

          },
          "UID": 1,
          "GID": 1
     }
]
$ sudo ls -la /home/stijn/.local/share/containers/storage/volumes/test/_data
total 0
drwxr-xr-x 1 1065536 1065536  0 apr  1 21:57 .
drwx------ 1 1065536 1065536 10 apr  1 21:57 ..

Output of podman version:

Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.8
Built:              Tue Mar 10 15:59:20 2020
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.13.8
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v1
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.11, commit: ff9d97a08d7a4b58267ac03719786e4e7258cecf'
  Distribution:
    distribution: gentoo
    version: unknown
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 100
      size: 1
    - container_id: 1
      host_id: 1065536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1065536
      size: 65536
  MemFree: 105845710848
  MemTotal: 270443495424
  OCIRuntime:
    name: runc
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 40
  eventlogger: journald
  hostname: taz
  kernel: 5.5.9-gentoo
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: Unknown
    Version: |-
      slirp4netns version 0.4.3
      commit: 2244b9b6461afeccad1678fac3d6e478c28b4ad6
  uptime: 57h 9m 15.77s (Approximately 2.38 days)
registries:
  search:
  - docker.io
  - quay.io
  - registry.fedoraproject.org
store:
  ConfigFile: /home/stijn/.config/containers/storage.conf
  ContainerStore:
    number: 9
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/stijn/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 11
  RunRoot: /run/user/1000/containers
  VolumePath: /home/stijn/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

NA (Gentoo)

Additional environment details (AWS, VirtualBox, physical, etc.):

Running on physical Gentoo, rootless mode. No SELinux.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 1, 2020
rhatdan (Member) commented Apr 2, 2020

@mheon This means that the volume was created to be owned by root of the user namespace rather than the default user of the container.

Since --userns=keep-id implies --user CURRENTUSER, we need to make sure this is passed down the stack.

@stintel If you execute the same command but add --user stintel (or whatever your username is), does it do the right thing?

giuseppe (Member) commented Apr 2, 2020

@rhatdan it seems we are always creating volumes as root, even without user namespaces involved.

mheon (Member) commented Apr 2, 2020

Inspect on the volume says it was created with UID/GID 1 - are we sure this isn't the volume being created as the first non-root UID/GID?
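The inspect output and the host-side ls earlier in the thread are consistent: under the uidmap from the reporter's podman info (container UID 1 mapped to host UID 1065536), a host-side owner of 1065536 is exactly container UID 1. A minimal sketch of that arithmetic; the helper name and hard-coded mapping are illustrative only, and note that --userns=keep-id rearranges this mapping:

```shell
# Map a host UID back to a container UID under the reporter's default
# rootless mapping: container 0 -> host 1000, container 1.. -> host 1065536..
host_to_ctr_uid() {
  local host_uid=$1
  if [ "$host_uid" -eq 1000 ]; then
    echo 0                              # container root is the invoking user
  else
    echo $(( host_uid - 1065536 + 1 ))  # rest of the range is offset subuids
  fi
}

host_to_ctr_uid 1065536   # -> 1: the volume's host owner is container UID 1
```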

mheon (Member) commented Apr 2, 2020

@stintel Can you provide an ls -al of the volume from within a container, to show what it's owned by within the user namespace?

stintel (Author) commented Apr 2, 2020

> @stintel Can you provide an ls -al of the volume from within a container, to show what it's owned by within the user namespace?

stijn@taz ~ $ podman run --rm -it --userns=keep-id --mount 'type=volume,src=test,dst=/home/user' --workdir /home/user ubuntu:18.04
1000@669e1d35a863:~$ ls -lad /home/user
drwxr-xr-x 1 root root 0 Apr  1 18:57 /home/user

> @stintel If you executed the same command but added --user stintel (or whatever your username is) does it do the right thing.

stijn@taz ~ $ podman run --rm -it --userns=keep-id --mount 'type=volume,src=test,dst=/home/user' --workdir /home/user --user stijn ubuntu:18.04
Error: unable to find user stijn: no matching entries in passwd file
stijn@taz ~ $ podman run --rm -it --userns=keep-id --mount 'type=volume,src=test,dst=/home/user' --workdir /home/user --user 1000 ubuntu:18.04
1000@8b4070b9d26c:~$ ls -lad /home/user
drwxr-xr-x 1 root root 0 Apr  1 18:57 /home/user
1000@8b4070b9d26c:~$

rhatdan (Member) commented Apr 2, 2020

So the bug is that built-in volumes are not being created based on the container's user, but always as the container's root. I thought we fixed this; maybe it only works as root.

rhatdan (Member) commented Apr 2, 2020

Broken as root as well.

# podman run -v test:/test --user 1000 fedora ls -ld /test
drwxr-xr-x. 2 root root 6 Apr  2 21:55 /test

@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Apr 2, 2020
rhatdan (Member) commented Apr 2, 2020

@sujil02 or someone else looking to contribute. @kunalkushwaha ???

The basic idea is that built-in volumes should be created with the ownership of the container's user.

In the example above, /test should have been owned by UID 1000, GID 1000.

In the --userns=keep-id case, it should be created owned by the user creating the container.
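The rule described above could be sketched as a tiny decision helper (hypothetical illustration only, not Podman's actual code; volume_owner_uid is an invented name):

```shell
# Which container-side UID should a newly created named volume be
# chowned to? Per the expected behavior above:
#   keep-id      -> the invoking user's UID
#   --user N set -> that UID inside the container
#   otherwise    -> container root
volume_owner_uid() {
  local userns=$1 user_flag=$2
  if [ "$userns" = "keep-id" ]; then
    id -u
  elif [ -n "$user_flag" ]; then
    echo "$user_flag"
  else
    echo 0
  fi
}

volume_owner_uid "" 1000   # -> 1000, matching the /test example above
```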

mheon (Member) commented Apr 2, 2020

I know we have code for this. Looks like it got broken at some point.

sujil02 (Member) commented Apr 3, 2020

> @sujil02 or someone else looking to contribute. @kunalkushwaha ???
>
> The basic idea is that built-in volumes should be created with the ownership of the container's user.
>
> In the example above, /test should have been owned by UID 1000, GID 1000.
>
> In the --userns=keep-id case, it should be created owned by the user creating the container.

Looking into it.

sujil02 (Member) commented Apr 13, 2020

@mheon
I looked into this and found the glitch: when we run a container with a volume and the volume doesn't exist, a new volume is created with root UID/GID as the default.
Ref: https://github.com/containers/libpod/blob/master/libpod/runtime_ctr.go#L312

However, what I am not sure of is whether I can pick the user flag params up from ctr.config.User, or whether some binding updates are needed to get the right UID mapped.

rhatdan (Member) commented Apr 13, 2020

I believe you can pick it up from ctr.config.User.

github-actions bot commented

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented May 14, 2020

@sujil02 Did you ever fix this problem?

sujil02 (Member) commented May 14, 2020

I did spend a lot of time working on it. The current ctr.config.User is basically a username string, and to pass appropriate user details to volumes, the bindings might need to be updated to hold a UID. I will give it another go today and see whether I was mistaken.

rhatdan (Member) commented May 14, 2020

There are calls in buildah that translate the User field to a UID by reading the /etc/passwd inside of the container:
"github.com/containers/buildah/pkg/chrootuser"

uid, gid, _, err := chrootuser.GetUser(s.mountPoint, s.builder.User())
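A rough shell analogue of what chrootuser.GetUser does, resolving a user name to uid:gid from the passwd file under the container's mounted rootfs (resolve_user and the throwaway rootfs below are illustrative only, not Podman or buildah API):

```shell
# Resolve a user name to "uid:gid" by reading the passwd file under a
# given rootfs, the way chrootuser.GetUser does from Go.
resolve_user() {
  local rootfs=$1 name=$2
  awk -F: -v u="$name" '$1 == u { print $3 ":" $4; exit }' "$rootfs/etc/passwd"
}

# Demo against a throwaway rootfs with a fabricated passwd file:
demo=$(mktemp -d)
mkdir -p "$demo/etc"
printf 'root:x:0:0:root:/root:/bin/bash\nstijn:x:1000:1000::/home/stijn:/bin/bash\n' > "$demo/etc/passwd"

resolve_user "$demo" stijn   # -> 1000:1000
```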

sujil02 (Member) commented May 14, 2020

Oh wow, this is great and certainly helps. On it.

rhatdan (Member) commented Jun 9, 2020

@sujil02 What is the scoop on this one?

@vrothberg vrothberg assigned vrothberg and unassigned sujil02 Jun 23, 2020
vrothberg (Member) commented

Looks like the behaviour changed.

--userns=keep-id

libpod (fix-5698) $ ./bin/podman run --rm -it --userns=keep-id --mount 'type=volume,src=foo,dst=/home/user' --workdir /home/user ubuntu:18.04
groups: cannot find name for group ID 1000
I have no name!@876a09ab2791:/home/user$ touch f
I have no name!@876a09ab2791:/home/user$ ls -la f
-rw-r--r--. 1 1000 1000 0 Jun 23 12:20 f
I have no name!@876a09ab2791:/home/user$ 

--user=1000

libpod (fix-5698) $ ./bin/podman run --rm -it --user=1000 --mount 'type=volume,src=foo,dst=/home/user' --workdir /home/user ubuntu:18.04
1000@64c9dd33e2f4:/home/user$ touch f
touch: cannot touch 'f': Permission denied
1000@64c9dd33e2f4:/home/user$ ls -la
total 8
drwxr-xr-x. 2 root root 4096 Jun 23 12:21 .
drwxr-xr-x. 3 root root 4096 Jun 23 12:21 ..
1000@64c9dd33e2f4:/home/user$ 

@rhatdan how do you want it to behave?

rhatdan (Member) commented Jun 23, 2020

The newly created directory should be owned by User 1000 (Relative to the User Namespace)

rhatdan (Member) commented Jun 23, 2020

Any volume that is created for a container should be owned by the "user" for PID 1. So if you run as root, it is owned by root of the user namespace. If you do --userns=keep-id, the container will run as your UID, so the volume should be owned by your UID.
If the container is going to be run with --user 1000, the volume should be owned by UID 1000 inside of the container. By default on single-user laptops, this is UID 100999 on the host, if I did not screw up the math.
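The 100999 figure follows from the default rootless mapping: container root maps to the invoking user's own UID, and container UIDs from 1 up map into the user's /etc/subuid range, which commonly starts at 100000. A sketch of that arithmetic, assuming the default range (ctr_to_host_uid is an invented helper):

```shell
# Container UID -> host UID under the default rootless mapping:
# container 0 is the invoking user; container 1..N map to the subuid
# range (assumed to start at 100000 unless overridden).
ctr_to_host_uid() {
  local ctr_uid=$1 sub_start=${2:-100000}
  if [ "$ctr_uid" -eq 0 ]; then
    id -u
  else
    echo $(( sub_start + ctr_uid - 1 ))
  fi
}

ctr_to_host_uid 1000   # -> 100999 with the common 100000 subuid start
```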

vrothberg (Member) commented

Already on it. Writing tests at the moment :)

vrothberg added a commit to vrothberg/libpod that referenced this issue Jun 23, 2020
When setting up the volumes for a container, make sure that new volumes
are created with the correct uid/gid which we're looking up in the
container config.

Fixes: containers#5698
Signed-off-by: Valentin Rothberg <[email protected]>
giuseppe added a commit to giuseppe/libpod that referenced this issue Jun 29, 2020
move the chown for newly created volumes after the spec generation so
the correct UID/GID are known.

Closes: containers#5698

Signed-off-by: Giuseppe Scrivano <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Jul 6, 2020
mheon added a commit to mheon/libpod that referenced this issue Mar 24, 2021
As part of a fix for an earlier bug (containers#5698) we added the ability
for Podman to chown volumes to correctly match the user running
in the container, even in adverse circumstances (where we don't
know the right UID/GID until very late in the process). However,
we only did this for volumes created automatically by a
`podman run` or `podman create`. Volumes made by
`podman volume create` do not get this chown, so their
permissions may not be correct. I've looked, and I don't think
there's a good reason not to do this chown for all volumes the
first time the container is started.

I would prefer to do this as part of volume copy-up, but I don't
think that's really possible (copy-up happens earlier in the
process and we don't have a spec). There is a small chance, as
things stand, that a copy-up happens for one container and then
a chown for a second, unrelated container, but the odds of this
are astronomically small (we'd need a very close race between two
starting containers).

Fixes containers#9608

Signed-off-by: Matthew Heon <[email protected]>
mheon added a commit to mheon/libpod that referenced this issue Mar 29, 2021
jmguzik pushed a commit to jmguzik/podman that referenced this issue Apr 26, 2021
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023
Projects
None yet
7 participants