
error creating libpod runtime: there might not be enough IDs available in the namespace #3421

Closed
juansuerogit opened this issue Jun 24, 2019 · 66 comments
Labels: kind/bug, locked - please file new issue/PR

Comments

@juansuerogit

juansuerogit commented Jun 24, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I have RHEL servers in the 7.x range (I think they are 7.4 or 7.5) that we currently run containers on with docker-compose. I went to a Red Hat conference and learned about Podman, so I want to use Podman in production to help us get away from the big fat daemons and to stop running containers as root.

To that end I have created a CentOS 7.5 VM on my laptop and installed Podman, but I cannot seem to get the uidmap functionality to work.

I'm hoping that once we solve this uidmap bug I'm encountering, we can take this and run it on the RHEL 7.4 servers.

On RHEL 7.4 we can only operate as a regular user, so we need to figure out rootless Podman.

I understand that some changes to the OS are needed and that we need administrative control to make them, like the subuid and subgid entries and the kernel params to enable user namespaces. We can do that, but on a day-to-day basis, including running the production containers, we have to be able to run rootless Podman and back up and recover the files as the same regular user (not root).

In addition, I'm not sure how to map an existing user in the container image (for example mongod, the mongodb user) to the regular server user, but that's maybe getting ahead of ourselves.

Steps to reproduce the issue:

  1. clean Centos 7.5 VM
  2. logged in as a regular user called "meta" (not root)
  3. sudo grubby --args="namespace.unpriv_enable=1 user_namespace.enable=1" --update-kernel="/boot/vmlinuz-3.10.0-957.5.1.el7.x86_64"
  4. sudo yum -y update && sudo yum install -y podman
  5. sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
  6. sudo echo 'meta:100000:65536' >> /etc/subuid
  7. sudo echo 'meta:100000:65536' >> /etc/subgid
  8. sudo reboot
  9. podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000

Describe the results you received:

Error: error creating libpod runtime: there might not be enough IDs available in the namespace (requested 100000:100000 for /home/meta/.local/share/containers/storage/vfs): chown /home/meta/.local/share/containers/storage/vfs: invalid argument

Describe the results you expected:

I expected a pod / container which would be running and which I could exec into and
create files inside as user root.

Upon exiting the container, I expect those files to be owned by user "meta".

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.3.2
RemoteAPI Version:  1
Go Version:         go1.10.3
OS/Arch:            linux/amd64

Output of podman info --debug:

WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 1.3.2
host:
  BuildahVersion: 1.8.2
  Conmon:
    package: podman-1.3.2-1.git14fdcd0.el7.centos.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: e0b5a754190a3c24175944ff64fa7add6c8b0431-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 410226688
  MemTotal: 3973316608
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: min0-kube0
  kernel: 3.10.0-957.21.3.el7.x86_64
  os: linux
  rootless: true
  uptime: 2h 25m 41.8s (Approximately 0.08 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /home/meta/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/meta/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /tmp/1000
  VolumePath: /home/meta/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

Centos 7.5 VM
sudo yum -y update && sudo yum install -y podman
sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
sudo echo 'meta:100000:65536' >> /etc/subuid
sudo echo 'meta:100000:65536' >> /etc/subgid
sudo reboot
podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000

@openshift-ci-robot added the kind/bug label on Jun 24, 2019
@mheon
Member

mheon commented Jun 24, 2019

--uidmap 0:100000:500 looks like the problem. You're requesting to map to UID 100000 with rootless Podman (I'm presuming that last Podman command in your reproducer is run without sudo).

You don't need to use --uidmap with rootless Podman - we'll automatically select the UID/GID ranges from subuid and subgid. You only need the uidmap flag if you want to change the way users are allocated within the container (for example, by default, the user launching Podman is mapped into the rootless container as UID 0 - you can change that with a few --uidmap args).

Just running Podman as a non-root user, no extra arguments or special flags (but with a configured /etc/subuid and /etc/subgid), is enough to launch your containers inside an unprivileged user namespace.
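
For example (a minimal sketch, assuming the meta:100000:65536 entries from your reproducer are in place):

podman run -dt ubuntu sleep 1000
podman unshare cat /proc/self/uid_map     # shows the mapping Podman picked automatically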

@mheon
Member

mheon commented Jun 24, 2019

And to provide further clarity on why it fails - --uidmap is trying to map to UID 100000, which is not mapped into the rootless user namespace. The container only has the 65536 UIDs from the ranges in /etc/subuid and /etc/subgid (plus one more - the UID/GID of the user that launches it). Mapping to UID 100000 and higher won't work, since we don't have any UIDs higher than 65536 available.

@mheon
Member

mheon commented Jun 24, 2019

Depends on how you want to use it... There's no requirement that the user running in the container must match the user who ran Podman. However, if you have volumes in the container, and you need to access them from the host, you generally will need to ensure the UIDs match. (Alternatively, you can use podman unshare to get a shell with UID/GID mappings matching the rootless container).

Technically, you'll also need 3 UID maps... One for UIDs below 23, one for 23 itself, and one for UIDs above 23. Because of this, we generally recommend just running the service in the container as UID 0 - it's not really root, it's the user that launched the container, so you don't give up anything in terms of security.
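
As a rough sketch (assuming the usual 65536-entry subuid range, with 23 standing in for whatever UID the service uses, and IMAGE as a placeholder), the three maps would look something like:

podman run --uidmap 0:1:23 --uidmap 23:0:1 --uidmap 24:24:65512 IMAGE

That maps container UIDs 0-22 and 24-65535 into the subordinate range, and maps container UID 23 to the user who launched Podman (intermediate UID 0).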

@juansuerogit
Author

OK, thanks, that got me past that error, but now I'm running rootless and getting image-related errors.

podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100

note: I'm using the fully qualified path here because without it I get another type of error.
Furthermore, I can't seem to pull from my company's registry either, even though I'm logged in to Docker via their tools. But I'm currently stuck at this error...

podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100

WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
Trying to pull docker.io/centos:latest...Getting image source signatures
Copying blob 8ba884070f61 done
Copying config 9f38484d22 done
Writing manifest to image destination
Storing signatures
ERRO[0026] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:54 for /run/lock/lockdev): lchown /run/lock/lockdev: invalid argument
ERRO[0026] Error pulling image ref //centos:latest: Error committing the finished image: error adding layer with blob "sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec67672fcfa135181ab3df": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:54 for /run/lock/lockdev): lchown /run/lock/lockdev: invalid argument
Failed
Error: unable to pull docker.io/centos:latest: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec67672fcfa135181ab3df": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:54 for /run/lock/lockdev): lchown /run/lock/lockdev: invalid argument

@mheon
Member

mheon commented Jun 24, 2019

WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids

There's your problem.

Do you have newuidmap and newgidmap binaries installed?
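
A quick way to check (assuming they come from the shadow-utils package, as is usual on CentOS/RHEL):

which newuidmap newgidmap
rpm -q shadow-utils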

@juansuerogit
Author

juansuerogit commented Jun 24, 2019

No, the directions at https://github.com/containers/libpod/blob/master/install.md didn't say to do this.

cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

Shall I follow these directions? https://www.scrivano.org/2018/10/12/rootless-podman-from-upstream-on-centos-7/

@juansuerogit
Author

In addition, when I create the directory manually I cannot exec into the container...

After running mkdir ./backup and then
podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100

the container can be seen as running with
e1516b7986b9 docker.io/library/centos:latest sleep 100 3 seconds ago Up 2 seconds ago nervous_williamson

but when I try to exec...

podman exec -ti -l bash
exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused "exit status 22""
Error: exit status 1

@mheon
Member

mheon commented Jun 24, 2019

@giuseppe Any idea about that exit status out of runc? Sounds like something we might have fixed in a more recent version.

@mheon
Member

mheon commented Jun 24, 2019

RE: the Docker issue - I'll look into this tomorrow. If we're not matching Docker, that's definitely a bug.

@juansuerogit
Author

juansuerogit commented Jun 24, 2019

Thanks, I'll check back tomorrow sometime. FYI, my requirement is to be able to run rootless. Here is the docker version...
not sure if they are clashing.
I didn't install runc or anything else.

docker version
Client:
Version: 18.09.6

podman version
Version: 1.3.2

@rhatdan
Member

rhatdan commented Jun 25, 2019

We explicitly decided not to follow Docker on this one (creating a bind-mount directory on the host when it does not exist). I believe this is a bug in Docker, since it can lead to user typos being ignored and unexpected directories/volumes being created.
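
So the host-side directory has to exist before the run; for the command from earlier in this thread that would be:

mkdir -p /home/meta/backup
podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100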

@rhatdan
Member

rhatdan commented Jun 25, 2019

There are other flags in the kernel that need to be set to use User Namespace on RHEL7/Centos 7.
@giuseppe PTAL

@giuseppe
Member

I see different issues here. The blog post I wrote some time ago seems outdated; I'll need to write another one.

So the first thing: newuidmap/newgidmap seem to be missing. You'll need to install them, or most images won't work (same issue as #3423).

You need to update runc, since the version you are using has different issues with rootless containers, e.g. it will complain about gid=5 using an unmapped UID even though that UID is present in the user namespace.

Currently upstream podman is broken for RHEL 7.5; the issue is being addressed in #3397.
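
To see which runc is installed and pull in a newer build (nothing Podman-specific here, just the usual package commands):

runc --version
sudo yum update runc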

@llchan
Contributor

llchan commented Jun 25, 2019

I have podman working on my normal host, but today when I tried it on a different host I saw the "not enough IDs available" error mentioned here. I must be forgetting a step that I ran on the other host, so if we could put together a pre-flight checklist, that would be helpful. Off the top of my head, here are the things I checked:

  • newuidmap/newgidmap exist on PATH (version 4.7)
  • runc exists on PATH (version 1.0.0-rc8)
  • slirp4netns exists on PATH (version 0.3.0)
  • conmon exists on PATH (version 1.14.4)
  • /proc/sys/user/max_user_namespaces is large enough (16k)
  • /etc/subuid and /etc/subgid have enough sub ids (64k, offset by a large number)
  • $XDG_RUNTIME_DIR exists
  • I ran podman system migrate to refresh the pause process

What am I forgetting? Is there something I can run to pinpoint the issue?
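
For reference, a rough way to run through those checks in one go (values are just the ones from the list above):

command -v newuidmap newgidmap runc slirp4netns conmon
cat /proc/sys/user/max_user_namespaces
grep "^$USER:" /etc/subuid /etc/subgid
echo "$XDG_RUNTIME_DIR"
podman system migrate
podman unshare cat /proc/self/uid_map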

@mheon
Member

mheon commented Jun 25, 2019

Is the image requesting an ID over 65k? Some images do include UIDs in the million range - those can break even a properly configured rootless setup.

@llchan
Contributor

llchan commented Jun 25, 2019

I don't think so, it said (requested 0:42 for /etc/shadow) for the alpine:latest I was testing with.

@rhatdan
Member

rhatdan commented Jun 25, 2019

Does podman unshare work?

podman unshare cat /proc/self/uid_map
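
On a fully configured rootless setup this typically prints two lines, roughly like the following (assuming UID 1000 and a 100000:65536 subordinate range); a single line means only your own UID is mapped:

         0       1000          1
         1     100000      65536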

@llchan
Contributor

llchan commented Jun 25, 2019

Yes, I think so:

$ podman unshare cat /proc/self/uid_map
         0      12345          1

unshare -U also appears to work.

@rhatdan
Member

rhatdan commented Jun 25, 2019

That indicates that the user executing podman unshare only has a single UID mapped (12345, their own).
I would guess that /etc/subuid does not have an entry for user 12345 (USERNAME).
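
If that's the case, adding entries like these (USERNAME and the range are placeholders) and refreshing the pause process should sort it out:

echo "USERNAME:100000:65536" | sudo tee -a /etc/subuid /etc/subgid
podman system migrate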

@llchan
Contributor

llchan commented Jun 25, 2019

Did a bit more snooping; it looks like the podman log level is not set early enough, so the newuidmap debug output is getting swallowed. I built a binary with that log level bumped up, and this is the error that causes the issue:

WARN[0000] error from newuidmap: newuidmap: open of uid_map failed: Permission denied

@mheon
Member

mheon commented Jun 25, 2019

Permissions issue on the binary?

@mheon
Member

mheon commented Jun 25, 2019

I'll tag @giuseppe in case it isn't that - he might have some ideas

@llchan
Contributor

llchan commented Jun 25, 2019

Binary is readable/executable and runs fine, but it looks like it's owned by a user other than root:root (we deploy packages differently to that host). Is it required for it to be root:root to do its magic?

Also, is there any way to detect that the newuidmap version is too old? I have a colleague who ran into an issue with his PATH, so it was falling back to the system newuidmap, and something clearer than an EPERM would have been nice.

@giuseppe
Member

Binary is readable/executable and runs fine, but it looks like it's owned by a user other than root:root (we deploy packages differently to that host). Is it required for it to be root:root to do its magic?

Yes, newuidmap/newgidmap must be owned by root, and they must either have file capabilities (fcaps) enabled or be installed setuid.

@juansuerogit
Author

So long story short I need to use RHEL 8?

@giuseppe
Member

So long story short I need to use RHEL 8?

That will surely help, as all the needed pieces are there, including an updated kernel where you can use fuse-overlayfs.

@rhatdan
Member

rhatdan commented Jun 26, 2019

getcap /usr/bin/newuidmap
/usr/bin/newuidmap = cap_setuid+ep

If this is not set then this will not work.
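
If it's missing, re-adding the file capabilities (or reinstalling shadow-utils, which should set them up again) is the usual fix; for example:

sudo setcap cap_setuid+ep /usr/bin/newuidmap
sudo setcap cap_setgid+ep /usr/bin/newgidmap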

@juansuerogit
Author

Is there a Podman-Compose? How do I run the same containers/container images iterated on in dev with Podman and Buildah, with a deployment to Amazon ECS, Azure AKS or IBM IKS?

@giuseppe
Member

@juansuerogit you can use podman generate kube and podman play kube
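
Roughly (mycontainer and mypod.yaml are just placeholder names):

podman generate kube mycontainer > mypod.yaml
podman play kube mypod.yaml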

@bolind

bolind commented Apr 28, 2020

I got similar errors, even with correctly configured /etc/subuid and /etc/subgid. Turns out, there's a known issue/bug when your home directory is on NFS. Try something like:

mkdir /tmp/foo && podman --root=/tmp/foo --runroot=/tmp/foo run alpine uname -a
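
To make that permanent rather than passing flags every time, the same locations can be set in ~/.config/containers/storage.conf (paths below are only examples; pick any non-NFS location):

[storage]
driver = "vfs"
graphroot = "/tmp/foo/storage"
runroot = "/tmp/foo/runroot"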

@rhatdan
Member

rhatdan commented Apr 28, 2020

NFS homedirs are covered in the troubleshooting guide.

@djmattyg007

To clarify, the machine on which I encountered this definitely had no NFS-related anything installed or running.

@andrescodas

I'm running on RHEL 8.3

I tried to follow your instructions but I still get:

there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument

this is my output:

codas:~$ cat /etc/subuid
codas:100000:65536
codas:~$ cat /etc/subgid
codas:100000:65536
codas:~$ ls -ls /usr/bin/newuidmap
44 -rwsr-xr-x. 1 root root 40632 Aug  7  2020 /usr/bin/newuidmap
codas:~$ ls -ls /usr/bin/newgidmap
48 -rwsr-xr-x. 1 root root 44760 Aug  7  2020 /usr/bin/newgidmap
codas:~$ podman system migrate
codas:~$ podman unshare cat /proc/self/uid_map
         0       1000          1

Can someone help me figure out what I am missing?

@mheon
Member

mheon commented Feb 21, 2021 via email

@andrescodas

@mheon

Have you tried running ‘podman system migrate’?

Yes. It is the second-to-last command I executed, as posted in my previous message here.

@mheon
Member

mheon commented Feb 23, 2021

@giuseppe Any ideas? It looks like everything should be in order here.

@giuseppe
Member

can you show the output of id?

@giuseppe
Member

Also, any reason to use CentOS 7.5 and not move to 8?

@andrescodas

@giuseppe

I'm not running CentOS

codas:~$ hostnamectl
   Static hostname: caperucita
         Icon name: computer-laptop
           Chassis: laptop
        Machine ID: ddd81f6a73a6436690f6572752407787
           Boot ID: 1af9a1e06eee48a1ac73107081d45723
  Operating System: Red Hat Enterprise Linux 8.3 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.3:GA
            Kernel: Linux 4.18.0-240.10.1.el8_3.x86_64
      Architecture: x86-64
codas:~$ id
uid=1000(codas) gid=1001(codas) groups=1001(codas),10(wheel),982(libvirt) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

@giuseppe
Member

thanks, can you tell what you get with:

$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
$ unshare -U sleep 1000 &
$ newuidmap $! 0 1000 1 1 100000 65536
$ newgidmap $! 0 1000 1 1 100000 65536

@andrescodas

@giuseppe

codas:~$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap = cap_setuid+ep
/usr/bin/newgidmap = cap_setgid+ep
codas:~$ unshare -U sleep 1000 &
[1] 509275
codas:~$ newuidmap $! 0 1000 1 1 100000 65536
newuidmap: open of uid_map failed: Permission denied
codas:~$ newgidmap $! 0 1000 1 1 100000 65536
newgidmap: gid range [0-1) -> [1000-1001) not allowed

should I use sudo?

@AdsonCicilioti

Hi guys,

My Outputs:

❯_ ~ cat /etc/subuid
adson:100000:65536

❯_ ~ cat /etc/subgid
adson:100000:65536

❯_ ~ ls -ls /usr/bin/newuidmap
40 -rwxr-xr-x 1 root root 36992 Sep 7 10:42 /usr/bin/newuidmap

❯_ ~ ls -ls /usr/bin/newgidmap
44 -rwxr-xr-x 1 root root 41088 Sep 7 10:42 /usr/bin/newgidmap

❯_ ~ podman system migrate

❯_ ~ podman unshare cat /proc/self/uid_map

        0       1000          1
        1     100000      65536

❯_ ~ podman run -d -p 3000:3000 heroku/nodejs-hello-world
Error: error creating container storage: could not find enough available IDs

The first time, after the fix with the podman system migrate step, the container works fine, but after it is stopped it no longer works.

@giuseppe
Member

codas:~$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap = cap_setuid+ep
/usr/bin/newgidmap = cap_setgid+ep
codas:~$ unshare -U sleep 1000 &
[1] 509275
codas:~$ newuidmap $! 0 1000 1 1 100000 65536
newuidmap: open of uid_map failed: Permission denied
codas:~$ newgidmap $! 0 1000 1 1 100000 65536
newgidmap: gid range [0-1) -> [1000-1001) not allowed

should I use sudo?

my mistake about newgidmap, it should be: newgidmap $! 0 1001 1 1 100000 65536

but newuidmap failed with EPERM; we need to figure out why that happened. It is not under Podman's control.

Any message in the logs? Can you also share cat /proc/self/mountinfo?

Error: error creating container storage: could not find enough available IDs

can you share the full message? What ID was not found?

@AdsonCicilioti

After I ran podman system reset and force-removed all the locked storage dirs/files, everything works again.

@andrescodas

@giuseppe

but newuidmap failed with EPERM, we need to figure out why that happened. It is not under the Podman control.

Can you suggest how to check the permissions? I included ls -last in the commands so you can check the permission details.

Any message in the logs? Can you also share cat /proc/self/mountinfo?

I'm posting /proc/self/mountinfo; let me know if you need another log.

can you share the full message?

I posted /proc/self/mountinfo; let me know if there is another message you need.

What ID was not found?

I don't know which ID you are talking about... I didn't see any message about a missing ID.

codas:~$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap = cap_setuid+ep
/usr/bin/newgidmap = cap_setgid+ep

codas:~$ unshare -U sleep 1000 &
[3] 580141

codas:~$ ls -last $(which newuidmap)
44 -rwsr-xr-x. 1 root root 40632 Aug  7  2020 /usr/bin/newuidmap

codas:~$ newuidmap $! 0 1000 1 1 100000 65536
newuidmap: open of uid_map failed: Permission denied

codas:~$ ls -last $(which newgidmap)
48 -rwsr-xr-x. 1 root root 44760 Aug  7  2020 /usr/bin/newgidmap

codas:~$ newgidmap $! 0 1001 1 1 100000 65536
newgidmap: open of gid_map failed: Permission denied

cat /proc/self/mountinfo > mountinfo.txt

mountinfo.txt

@giuseppe
Member

thanks, that was helpful. newuidmap and newgidmap seem to have both setuid and file capabilities.

Does rpm -V shadow-utils report any issue?

Can you reinstall the shadow-utils package?

This is how it looks for me on CentOS 8:

$ ls -last /usr/bin/new?idmap
44 -rwxr-xr-x. 1 root root 44760 Aug 12  2020 /usr/bin/newgidmap
40 -rwxr-xr-x. 1 root root 40632 Aug 12  2020 /usr/bin/newuidmap

I do not know of what ID you are talking about... I didn't see any message talking about a missing ID

sorry that was a question for @AdsonCicilioti

@andrescodas

@giuseppe

Thank you very much; it seems that reinstalling shadow-utils helped. This is the output, just in case:

codas:~$ rpm -V shadow-utils
S.5....T.  c /etc/login.defs
.M.......    /usr/bin/newgidmap
.M.......    /usr/bin/newuidmap

codas:~$ sudo dnf reinstall -y shadow-utils
Last metadata expiration check: 1:16:16 ago on Tue 23 Feb 2021 03:32:07 PM -03.
 Package                                       Architecture                            Version                                         Repository                                                      Size
============================================================================================================================================================================================================
  shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                                         
Transaction Summary
============================================================================================================================================================================================================

Total download size: 1.2 M
Installed size: 3.8 M
Downloading Packages:
shadow-utils-4.6-11.el8.x86_64.rpm                                                                                                                                          489 kB/s | 1.2 MB     00:02    
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                       488 kB/s | 1.2 MB     00:02    
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                    1/1
  Reinstalling     : shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                   1/2
  Cleanup          : shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                   2/2
  Running scriptlet: shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                   2/2
  Verifying        : shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                   1/2
  Verifying        : shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                   2/2
Installed products updated.

Reinstalled:
  shadow-utils-2:4.6-11.el8.x86_64                                                                                                                                                                         

Complete!

codas:~$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap = cap_setuid+ep
/usr/bin/newgidmap = cap_setgid+ep

codas:~$ unshare -U sleep 1000 &
[1] 654868

codas:~$ ls -last $(which newuidmap)
44 -rwxr-xr-x. 1 root root 40632 Aug  7  2020 /usr/bin/newuidmap

codas:~$ newuidmap $! 0 1000 1 1 100000 65536

codas:~$ ls -last $(which newgidmap)
48 -rwxr-xr-x. 1 root root 44760 Aug  7  2020 /usr/bin/newgidmap

codas:~$ newgidmap $! 0 1001 1 1 100000 65536

codas:~$ cat /proc/self/mountinfo > mountinfo.txt

codas:~$ rpm -V shadow-utils
S.5....T.  c /etc/login.defs

codas:~$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

mountinfo.txt

The github-actions bot added the locked label, locked the issue as resolved, and limited conversation to collaborators on Sep 22, 2023.