Podman user mode doesn't work after uid change #11377

Closed
ananthb opened this issue Aug 31, 2021 · 26 comments
Labels
Good First Issue (This issue would be a good issue for a first time contributor to undertake.)
kind/bug (Categorizes issue or PR as related to a bug.)
locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments.)
stale-issue
volunteers-wanted (Issues good for community/volunteer contributions)

Comments

@ananthb
Contributor

ananthb commented Aug 31, 2021

/kind bug

Description
I changed my user account's id from 1001 to 1000 on a system where I had already started using podman as that user.
After changing ids, all podman operations fail with Error: error creating tmpdir: mkdir /run/user/1001: permission denied.

Steps to reproduce the issue:

  1. Create a user account
  2. Use podman with this account to build images and run containers.
  3. Change the user and group ids using usermod -u <new-uid> <user> && groupmod -g <new-gid> <group> (see the condensed sketch after these steps).
  4. Reboot
  5. Run podman and see permission error
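
A condensed sketch of these steps (the user name "alice" and the 1001 to 1000 ids are hypothetical examples, not taken from this report):

# change the account's uid and the primary group's gid
$ sudo usermod -u 1000 alice
$ sudo groupmod -g 1000 alice
# usermod re-owns files in the home directory, but files elsewhere
# (and group ownership) may still carry the old ids
$ sudo reboot
# after logging back in, every rootless podman command fails:
$ podman version
Error: error creating tmpdir: mkdir /run/user/1001: permission denied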

Describe the results you received:
Podman fails trying to create a run directory for the wrong user id.

Describe the results you expected:
Podman works correctly with the new user id.

Additional information you deem important (e.g. issue happens only occasionally):
Root podman still works correctly on this machine. I'm unable to run even podman version as my user.

Output of podman version:

Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.16
Built:        Thu Jan  1 05:30:00 1970
OS/Arch:      linux/arm64

Output of podman info --debug:

host:
  arch: arm64
  buildahVersion: 1.19.6
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "21.04"
  eventLogger: journald
  hostname: wopr
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.11.0-1016-raspi
  linkmode: dynamic
  memFree: 235421696
  memTotal: 3974946816
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0~rc95-0ubuntu1~21.04.2
      spec: 1.0.2-dev
      go: go1.16.2
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4567515136
  swapTotal: 4730044416
  uptime: 26h 20m 5.21s (Approximately 1.08 days)
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 24
    paused: 0
    running: 21
    stopped: 3
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 13
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Thu Jan  1 05:30:00 1970
  GitCommit: ""
  GoVersion: go1.16
  OsArch: linux/arm64
  Version: 3.0.1

Package info (e.g. output of rpm -q podman or apt list podman):

Listing... Done
podman/hirsute,now 3.0.1+dfsg1-1ubuntu1 arm64 [installed]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical on a raspberry pi 4.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 31, 2021
@matejvasek
Contributor

@ananthb don't you need to restart or re-login (using the su command) for such a change to take effect?

@matejvasek
Contributor

AFAIK even the mere addition of a user to a group won't take effect immediately; I would expect similar for a uid change.

@ananthb
Contributor Author

ananthb commented Aug 31, 2021

This was after rebooting.

@matejvasek
Contributor

What is the output of echo "$XDG_RUNTIME_DIR"?

@mheon
Member

mheon commented Aug 31, 2021

Podman caches the temporary files directory in use in the database, to ensure it always remains constant; in this case, we likely cached /run/user/1001 and now that is no longer your user's given rundir, and inaccessible to you. Unfortunately, the path may be cached throughout the database; there's no easy way to work around it. You will probably need to reset your storage with podman system reset (losing all containers and images).

@ananthb
Contributor Author

ananthb commented Sep 1, 2021

@matejvasek

$ echo "$XDG_RUNTIME_DIR"
/run/user/1000

@mheon podman system reset also fails with the same error. I finally fixed it by manually deleting $HOME/.local/share/containers, but I'm interested in figuring out how to fix it permanently.
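
For anyone else stuck at the same point, the workaround that worked here, spelled out (the path is the default rootless storage location; deleting it destroys all rootless containers and images):

# podman system reset fails with the same tmpdir error, so remove the
# rootless state by hand instead:
$ rm -rf ~/.local/share/containers
# the next invocation recreates the storage and caches the new rundir:
$ podman info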

@mheon
Member

mheon commented Sep 1, 2021

That should be a permanent fix - the DB is gone, so we'll now cache your new paths, and Podman should go back to working as expected.

If you plan on changing UID/GID again, this will unfortunately happen again; it's not really something we accounted for in Podman's architecture.

@ananthb
Contributor Author

ananthb commented Sep 1, 2021

@mheon would it be possible to detect the change and update the DB on podman startup?

@mheon
Member

mheon commented Sep 1, 2021

It could theoretically be added to podman system migrate, with the caveat that all containers would have to be stopped when it was run. It depends on how many places we're storing the path - it could get very difficult to locate and rewrite them all.

@ananthb
Contributor Author

ananthb commented Sep 2, 2021

I'm willing to take a look. Any pointers on where I can start reading the DB code? @mheon @matejvasek

@giuseppe
Member

@ananthb have you had a chance to look at libpod? Are you still interested in working on it?

@ananthb
Contributor Author

ananthb commented Sep 30, 2021

@giuseppe I'm still interested but I haven't had the time to look at it yet. I'm going to start now.

@mheon
Member

mheon commented Sep 30, 2021

The DB interface code lives in https://github.com/containers/podman/blob/main/libpod/boltdb_state.go and https://github.com/containers/podman/blob/main/libpod/boltdb_state_internal.go

I think you're looking at several different stages here - we need to change the runtime-config table to reflect the new paths, then we need to find any pods/containers/volumes that have affected paths and rewrite them. The best way of doing this would be an addition to podman system migrate, which can already do conditional rewrites of container configurations.
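
One way to see how widely the stale path is cached before attempting a rewrite (the DB location below is the usual rootless default for this Podman version; verify it on your system):

$ strings ~/.local/share/containers/storage/libpod/bolt_state.db | grep '/run/user/'
# expect hits for the old uid, e.g. /run/user/1001/libpod/tmp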

@ananthb
Contributor Author

ananthb commented Oct 25, 2021

I tried digging into this change, but it seems like a lot of effort for not a lot of payoff. I'd like to contribute to podman, but I'd like to try something else out. The easy fix is to just nuke the storage folder anyway, and that works for me.

Do you want to keep this issue around or close it, @mheon? Also, is there anything else I can work on? I have some hours I'd like to contribute.

@mheon
Member

mheon commented Oct 25, 2021

We can keep it around in case anyone else would like to take a crack at it.

If you'd like to work on an issue, something like #12063 might be good? We've stopped applying the Good First Issue label, unfortunately; I'll try to remember to add it again to simple issues.

@ananthb
Contributor Author

ananthb commented Oct 25, 2021

Awesome, thanks. I might even circle back to this once I have a better grasp of how things work.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan rhatdan added Good First Issue This issue would be a good issue for a first time contributor to undertake. volunteers-wanted Issues good for community/volunteer contributions and removed stale-issue labels Nov 29, 2021
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented Mar 3, 2022

A friendly reminder that this issue had no activity for 30 days.

@giuseppe
Member

giuseppe commented Mar 3, 2022

@ananthb are you still interested in working on this issue?

@ananthb
Contributor Author

ananthb commented Mar 3, 2022

Yeah! I definitely have more time to look at this now.

@github-actions

github-actions bot commented Apr 3, 2022

A friendly reminder that this issue had no activity for 30 days.

@giuseppe
Member

giuseppe commented Apr 3, 2022

I am closing this issue since migrating to a new UID is a very specific corner case, and it seems to me non-trivial to maintain in the long term: we would have to deal with different run directories and storage directories, as well as volumes, and every time we add a new feature we would have to make sure it can be migrated to a new UID.

If you are still interested in working on it though, please feel free to open a PR and it will be easier to evaluate its maintenance costs.

@giuseppe giuseppe closed this as completed Apr 3, 2022
@da2ce7

da2ce7 commented Nov 14, 2022

@giuseppe
After migrating laptops, on Fedora 37, I have a new UserID. Now my podman setup is broken. I do not think that this is a corner case. I believe it is common to migrate users to new systems and give them a new UserID.

Btw. The "Error: error creating tmpdir: mkdir /run/user/1000: permission denied" still bocks the podman system reset command from working.

I think that at least the podman system reset command should be aware of the possibility that the user has a new ID.

Additionally, the error "Error: error creating tmpdir: mkdir /run/user/1000: permission denied" is badly worded. Podman should detect if the UserID has changed and provide an appropriate error.

I recommend that this issue be reopened.
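
The detection da2ce7 asks for amounts to comparing the cached rundir against the one the current session actually owns; a rough sketch of the check (the cached path here is a hypothetical example, not podman code):

cached_dir=/run/user/1001            # what the DB remembers
current_dir=/run/user/$(id -u)       # what the session actually has
if [ "$cached_dir" != "$current_dir" ]; then
    echo "uid appears to have changed ($cached_dir vs $current_dir);" \
         "reset storage or remove ~/.local/share/containers" >&2
fi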

@Mte90

Mte90 commented Apr 27, 2023

I have the same issue.
I changed the uid on my system for various reasons (a debian sid machine that has been running for 10 years) and I am in this situation:
[screenshot of the error]

In my case I removed the .local/share/containers folder manually, but I am still getting errors.
The /run/user/1000/ directory on my machine is owned by the root user. Maybe those commands should give more helpful hints about what to do, or at least explain why podman is using the wrong uid.
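
When /run/user/<uid> is owned by root, the runtime directory was usually created before the uid change and never handed over; one way to check and recover (a sketch; terminate-user ends your session, so run it from another login or a root console):

$ ls -ld /run/user/$(id -u)            # the owner should be you, not root
$ sudo loginctl terminate-user "$USER"
# log in again afterwards; systemd-logind recreates /run/user/<uid>
# with the correct ownership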

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 25, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 25, 2023