podman unshare chown doesn't work with externally mounted drives #9646
Comments
I think this is an issue with samba.
If the chown command above does not fail, and yet the directory is not chowned, then something is happening in the samba protocol that I don't know or understand, and you would be better off asking on a samba board. As far as Podman is concerned, this is not a podman issue, but a file system issue.
BTW, on NFS you would get permission denied above, because the NFS server would see this as testuser attempting to chown files to 100031, which would not be allowed since the NFS server knows nothing about the user namespace.
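(For reference, the mapping behind that 100031 can be inspected from inside the rootless namespace; the ranges below are only an illustrative example, the real ones come from /etc/subuid and /etc/subgid.)

```console
$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
```

With a mapping like this, UID 32 inside the namespace corresponds to host UID 100000 + 32 - 1 = 100031.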
samba mounting already provides a way to specify different UIDs, as I described in the alternatives. But how can someone on a samba board understand a `podman unshare chown`? They may understand a normal, straightforward `chown`, but I'm not sure how someone on the samba side would understand this kind of logic without linking it to user namespaces.
I don't think the uid,gid options you can specify at mount time can help when there are multiple users available in the user namespace. A chown to root from within the user namespace is not the problem; the issue exists when you try to chown to a user that is not root in the user namespace. Have you tried what @rhatdan suggested here #9646 (comment)?
Well, that's the point I was making as part of the alternatives explanation: that they are not good enough.
Not sure what you mean? If you read my initial post, that's exactly what I was trying to explain: the command doesn't fail, but neither does it change the ownership. What am I missing here? According to @rhatdan the problem lies with samba, but I don't think the samba folks can understand this problem without understanding user namespaces.
That is a Samba question, and has nothing to do with us. The chown is happening within a user namespace, and samba is ignoring it and returning success to the chown command. I would open up an issue with Samba on this, and cc us if you want.
To follow this ticket.
I don't think it is Samba's problem, as the namespacing is done on the client side prior to sending a request to the server. The SMB protocol does not operate on UIDs/GIDs; it uses SIDs, and translation back and forth between IDs and SIDs happens on the client side. The manual page for mount.cifs has more detail (note the behaviour changes since kernel 4.13).
From my reading, this is not something Podman can solve; it is between the kernel and the samba client. So I am going to close.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
The following command doesn't change the ownership to a desired UID (other than the default `root`, UID 0) inside the container. I would like externally mounted network drives (owned by the rootless user) on the host machine to be owned by a specific user ID (other than the obvious `root`, UID 0) inside the container after they are volume mounted.

Steps to reproduce the issue:
1. Spin up a container (from any image of your choice) from within your rootless user's namespace and map some directory that resides inside the externally mounted directory onto some location inside the container. For example, let's say we volume mount `/home/testuser/mysamba_share_dir/test` onto the `/tmp/test` directory inside the container.
2. Create a separate user inside the container whose UID is, say, 32.
3. Back on the host machine, run the chown command from your rootless user's shell, as in the sketch below.
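For example (a sketch only: the image, container name and user name are placeholders, and the samba share is assumed to already be CIFS-mounted under `/home/testuser/mysamba_share_dir`):

```console
# 1. Run a container with the share's subdirectory volume-mounted into it
podman run -dt --name reprotest \
    -v /home/testuser/mysamba_share_dir/test:/tmp/test \
    docker.io/library/alpine

# 2. Create a non-root user with UID 32 inside the container
podman exec reprotest adduser -D -u 32 appuser

# 3. Back on the host, chown the directory as it is seen from the
#    rootless user namespace
podman unshare chown -R 32:32 /home/testuser/mysamba_share_dir/test
```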
Describe the results you received:

This is what I see when I do an `ls -l` on the externally mounted directory (illustrated below): it's still showing `testuser`, i.e., the rootless user, as the owner.
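For illustration only (the file name and date are made up), the listing looks like this, with ownership unchanged:

```console
$ ls -l /home/testuser/mysamba_share_dir/test
total 0
-rwxr-xr-x. 1 testuser testuser 0 Mar  8 10:00 somefile.txt
```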
Describe the results you expected:

At this point I expect to see something like this when I do an `ls -l` on the mounted directory (illustrated below):
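Illustrative listing only, assuming my subuid range starts at 100000 so that container UID 32 maps to host UID 100031 (the file name is made up):

```console
$ ls -l /home/testuser/mysamba_share_dir/test
total 0
-rwxr-xr-x. 1 100031 100031 0 Mar  8 10:00 somefile.txt
```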
Additional information you deem important (e.g. issue happens only occasionally):

Of course, this problem doesn't occur if I map a local directory on the host onto the container. For example, if I map `/home/testuser/docs/sample_dir` onto `/tmp/test` inside the container and then run `podman unshare chown -R 32:32 /home/testuser/docs/sample_dir`, this will definitely produce the expected result, like below.
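A sketch of the working local-directory case (again assuming container UID 32 maps to host UID 100031; the file name is made up):

```console
$ podman unshare chown -R 32:32 /home/testuser/docs/sample_dir
$ ls -l /home/testuser/docs/sample_dir
total 0
-rw-r--r--. 1 100031 100031 0 Mar  8 10:00 report.txt
```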
The problem only happens when I try to do the `unshare` & `chown` on an externally mounted directory. Is this a `podman` design limitation? Or is it a `cifs` mount limitation? Or should I be doing the cifs mounting in a different way in the first place? I don't know. But I sure have use cases where I would like externally mounted network drives to be owned by specific desired user IDs (other than UID 0) inside the container.
Alternative 1: do a `cifs` mount using the `subuids` on the host. Something like the sketch below.
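A hypothetical example (the server name, share name and credentials file are placeholders; 100031 is assumed to be the subuid that maps to container UID 32):

```console
sudo mount -t cifs //fileserver/myshare /home/testuser/mysamba_share_dir \
    -o credentials=/home/testuser/.smbcreds,uid=100031,gid=100031
```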
And then volume mount it onto the container. But now the entire samba share will be owned by UID 32 inside the container, which is not what I want. There's only a certain directory within the samba share that I would like to be owned by UID 32, not the entire samba share.
Alternative 2: create a new samba share on the samba server that points to a sub-directory of the first samba share, and mount both shares separately using UID 1000 and UID 10031. Something like the sketch below.
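A hypothetical example (server, share and mount-point names are placeholders; the exact subuid depends on the range in /etc/subuid):

```console
# The full share, mounted as the rootless user (UID 1000)
sudo mount -t cifs //fileserver/myshare /home/testuser/mysamba_share_dir \
    -o credentials=/home/testuser/.smbcreds,uid=1000,gid=1000

# A second share exposing only the sub-directory, mounted as subuid 10031
sudo mount -t cifs //fileserver/myshare_test /home/testuser/mysamba_share_test \
    -o credentials=/home/testuser/.smbcreds,uid=10031,gid=10031
```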
This alternative means creating a new samba share that only has the specific directory that I intend to be owned by UID 32. And this gets even more complicated, because now I will have two samba shares, one being a subset of the other: one mounted as the rootless user, i.e., UID 1000, and the other (which points to a subdirectory within the first samba share) mounted as `subuid` 10031.

Not sure how feasible this strategy is. Moreover, this would also mean demanding the creation of a new samba share on the remote samba server, something that is not always easy to convince people to do.
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
CentOS 8 VM or Arch Linux