error creating libpod runtime: there might not be enough IDs available in the namespace #3421
Comments
You don't need to use --uidmap. Just running Podman as a non-root user, with no extra arguments or special flags (but with a configured /etc/subuid and /etc/subgid), is enough. |
And to provide further clarity on why it fails - |
Depends on how you want to use it... There's no requirement that the user running in the container must match the user who ran Podman. However, if you have volumes in the container, and you need to access them from the host, you generally will need to ensure the UIDs match. (Alternatively, you can use Technically, you'll also need 3 UID maps... one for UIDs below 23, one for 23 itself, and one for UIDs above 23. Because of this, we generally recommend just running the service in the container as UID 0 - it's not really root, it's the user that launched the container, so you don't give up anything in terms of security. |
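As an illustration of the three-range mapping described above, here is a sketch of what it can look like on the command line. It assumes the service in the image runs as UID 23 and that the user has a 65536-entry range in /etc/subuid; adjust the numbers for your setup.

```
# container UIDs 0-22  -> subordinate UIDs 1-23
# container UID 23     -> the user who ran podman (UID 0 in the rootless namespace)
# container UIDs 24+   -> the remaining subordinate UIDs
podman run --rm \
  --uidmap 0:1:23 \
  --uidmap 23:0:1 \
  --uidmap 24:24:65513 \
  docker.io/centos:latest id
```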
OK, thanks, that got me past that error, but now I'm running rootless and getting image-related errors.
podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100
Note: I'm using the fully qualified image name here because without it I get another type of error.
podman run -v /home/meta/backup:/root/backup -dt docker.io/centos:latest sleep 100
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids |
There's your problem. Do you have |
No, the directions at https://github.com/containers/libpod/blob/master/install.md didn't say to do this.
cat /etc/centos-release
Shall I follow these directions? https://www.scrivano.org/2018/10/12/rootless-podman-from-upstream-on-centos-7/ |
In addition, when I create the directory manually I cannot exec into the container... After running mkdir ./backup and then the container can be seen as running with but when I try to exec... podman exec -ti -l bash |
@giuseppe Any idea about that exit status out of runc? Sounds like something we might have fixed in a more recent version. |
RE: the Docker issue - I'll look into this tomorrow. If we're not matching Docker, that's definitely a bug. |
Thanks, I'll check back tomorrow sometime. FYI, my requirement is to be able to run rootless. Here is my docker version...
docker version
podman version |
We explicitly decided not to follow Docker on this one: creating a bind-mount volume on the host when it does not exist. I believe that this is a bug in Docker, since it could lead to user typos being ignored and unexpected directories/volumes being created. |
There are other flags in the kernel that need to be set to use user namespaces on RHEL 7/CentOS 7. |
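For reference, a sketch of how user namespaces are typically enabled on RHEL 7/CentOS 7; the sysctl value and kernel arguments here are assumptions, so check the Red Hat documentation for your exact release:

```
# allow unprivileged users to create user namespaces (the value is an example)
echo 'user.max_user_namespaces=15000' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# on some 7.x kernels, user namespaces must also be enabled on the kernel command line
sudo grubby --args="namespace.unpriv_enable=1 user_namespace.enable=1" \
     --update-kernel="$(grubby --default-kernel)"
sudo reboot
```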
I see different issues here. The blog post I wrote some time ago seems outdated; I'll need to write another one. So the first thing: newuidmap/newgidmap seem to be missing; you'll need to install them, or most images won't work (same issue as #3423). You need to update runc, since the version you are using has different issues with rootless containers, e.g. it will complain about
Currently upstream podman is broken for RHEL 7.5; the issue is being addressed with #3397 |
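A rough pre-flight sketch covering the points above (the user name meta and the ranges are taken from this thread; package and binary availability are assumptions for your distribution):

```
# subordinate ID ranges for the rootless user
grep meta /etc/subuid /etc/subgid       # expect something like meta:100000:65536 in both files

# newuidmap/newgidmap must be installed and usable
command -v newuidmap newgidmap

# runc must be recent enough to handle rootless containers
runc --version

# pick up subuid/subgid changes made after rootless storage was first created
podman system migrate
```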
I have podman working on my normal host, but today when I went to try it on a different host I saw the "not enough IDs available" error mentioned here. I must be forgetting a step that I ran on the other host, so if we could put together a pre-flight checklist that would be helpful. Off the top of my head here are the things I checked:
What am I forgetting? Is there something I can run to pinpoint the issue? |
Is the image requesting an ID over 65k? Some images do include UIDs in the million range - those can break even for properly configured rootless. |
I don't think so, it said |
Does podman unshare work? podman unshare cat /proc/self/uid_map |
Yes, I think so:
|
That indicates that the user executing podman unshare only has a single UID (12345) mapped. |
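For comparison, a correctly configured rootless setup usually shows two ranges here. The numbers below assume a host UID of 1000 and a subordinate range of 100000-165535:

```
$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

# a single line like this means the /etc/subuid range was not applied
$ podman unshare cat /proc/self/uid_map
         0      12345          1
```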
Did a bit more snooping, looks like the podman log level is not set early enough, so the newuidmap debug output is getting swallowed. I built a binary with that log level bumped up and this is the error that causes the issue:
|
Permissions issue on the binary? |
I'll tag @giuseppe in case it isn't that - he might have some ideas |
Binary is readable/executable and runs fine, but it looks like it's owned by a user other than root:root (we deploy packages differently to that host). Is it required for it to be root:root to do its magic? Also, is there any way to detect that the newuidmap version is too old? I have a colleague who ran into an issue with his PATH so it was falling back to the system newuidmap, and something other than an EPERM would have been nice. |
Yes, newuidmap/newgidmap must be owned by root, and they must either have file capabilities (fcaps) enabled or be installed setuid. |
So long story short I need to use RHEL 8? |
that will surely help as all the needed pieces are there, including an updated kernel where you can use fuse-overlayfs. |
Check getcap /usr/bin/newuidmap. If this is not set, then this will not work. |
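Roughly what a working setup looks like; the capability output format varies with the libcap and shadow-utils versions, so treat this as a sketch:

```
$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap = cap_setuid+ep
/usr/bin/newgidmap = cap_setgid+ep

# alternatively, the binaries may be installed setuid root instead of carrying file capabilities
$ ls -l /usr/bin/newuidmap
-rwsr-xr-x. 1 root root 40632 Aug  7  2020 /usr/bin/newuidmap
```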
Is there a Podman-Compose? How do I take the same containers/container images iterated on in dev with Podman and Buildah and deploy them to Amazon ECS, Azure AKS, or IBM IKS? |
@juansuerogit you can use |
I got similar errors, even with correctly configured /etc/subuid and /etc/subgid. Turns out, there's a known issue/bug when your home directory is on NFS. Try something like:
|
NFS homedirs are covered in the troubleshooting guide. |
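For anyone hitting the NFS case, the usual workaround is to point rootless storage at a local filesystem via a per-user storage.conf. A sketch, with illustrative paths and UID:

```
# ~/.config/containers/storage.conf
[storage]
driver = "overlay"                                  # or "vfs", depending on your setup
graphroot = "/var/tmp/alice/containers/storage"     # must be on a local (non-NFS) filesystem
runroot = "/run/user/1000/containers"
```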
To clarify, the machine on which I encountered this definitely had no NFS-related anything installed or running. |
I'm running on RHEL 8.3. I tried to follow your instructions but I still get:
This is my output:
Can someone help me figure out what I am missing? |
Have you tried running ‘podman system migrate’?
…On Sat, Feb 20, 2021 at 19:36 Andres Codas ***@***.***> wrote:
I'm running on rhel 8.3
I tried to follow your instructions but I still get:
there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument
this is my output:
codas:~$ cat /etc/subuid
codas:100000:65536
codas:~$ cat /etc/subgid
codas:100000:65536
codas:~$ ls -ls /usr/bin/newuidmap
44 -rwsr-xr-x. 1 root root 40632 Aug 7 2020 /usr/bin/newuidmap
codas:~$ ls -ls /usr/bin/newgidmap
48 -rwsr-xr-x. 1 root root 44760 Aug 7 2020 /usr/bin/newgidmap
codas:~$ podman system migrate
codas:~$ podman unshare cat /proc/self/uid_map
0 1000 1
Can someone help me figure out what am I missing?
|
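A note on the podman system migrate suggestion above: rootless Podman keeps a long-lived pause process that holds the user namespace, so edits to /etc/subuid and /etc/subgid are not picked up until that namespace is recreated. A sketch of the usual sequence:

```
# stop the per-user pause process (and running containers) so the namespace is rebuilt
podman system migrate

# the mapping should now show the full subordinate range
podman unshare cat /proc/self/uid_map
```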
@giuseppe Any ideas? It looks like everything should be in order here. |
can you show the output of |
Also, any reason to use CentOS 7.5 and not move to 8? |
I'm not running CentOS
|
thanks, can you tell what you get with:
|
should I use sudo? |
Hi guys, my outputs:
❯_ ~ cat /etc/subuid
❯_ ~ cat /etc/subgid
❯_ ~ ls -ls /usr/bin/newuidmap
❯_ ~ ls -ls /usr/bin/newgidmap
❯_ ~ podman system migrate
❯_ ~ podman unshare cat /proc/self/uid_map
❯_ ~ podman run -d -p 3000:3000 heroku/nodejs-hello-world
On first time after fix with |
My mistake about newgidmap, it should be:
but newuidmap failed with EPERM; we need to figure out why that happened. It is not under Podman's control. Any message in the logs? Can you also share
can you share the full message? What ID was not found? |
After I run |
Can you suggest how to check the permissions? I included in the commands
I'm posting
I posted
I do not know what ID you are talking about... I didn't see any message about a missing ID
|
Thanks, that was helpful. newuidmap and newgidmap seem to have both setuid and file capabilities. Can you reinstall the shadow-utils package? This is how it looks for me on CentOS 8:
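For reference, reinstalling shadow-utils and re-checking the binaries would look roughly like this on a dnf-based system (a sketch; the exact capability output varies by version):

```
sudo dnf reinstall shadow-utils

# verify ownership and capabilities afterwards
ls -l /usr/bin/newuidmap /usr/bin/newgidmap
getcap /usr/bin/newuidmap /usr/bin/newgidmap
```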
sorry that was a question for @AdsonCicilioti |
thank you very much, seems that the re-installation of shadow-utils helped. This is the output just in case:
|
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I have RHEL servers in the 7.x range (I think they are 7.4 or 7.5) that we currently run containers on with docker-compose. I went to a Red Hat conference and learned about Podman, so I want to use Podman in production to help us get away from the big fat daemons and to stop running containers as root.
To that end I have created a CentOS 7.5 VM on my laptop and installed Podman, but I cannot seem to get the uidmap functionality to work.
I'm hoping that once we solve this uidmap bug I'm encountering, we can take this and run it on the RHEL 7.4 servers.
On RHEL 7.4 we can only operate as a regular user, so we need to figure out rootless Podman.
I understand that some changes to the OS are needed and that we need administrative control to do this, like the subuid and subgid entries and the kernel parameters to enable user namespaces. We can do that. But on a day-to-day basis, including running the production containers, we have to be able to run rootless Podman and back up and recover the files as the same regular user (not root).
In addition, I'm not sure how to map an existing user in the container image (for example mongod, the MongoDB user) to the regular server user, but that's maybe getting ahead of ourselves.
Steps to reproduce the issue:
Describe the results you received:
Error: error creating libpod runtime: there might not be enough IDs available in the namespace (requested 100000:100000 for /home/meta/.local/share/containers/storage/vfs): chown /home/meta/.local/share/containers/storage/vfs: invalid argument
Describe the results you expected:
I expected a pod/container which would be running, and that I could exec into it and create files inside the container as user root.
Upon exiting the container, I expect those files to be owned by user "meta".
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Additional environment details (AWS, VirtualBox, physical, etc.):
Centos 7.5 VM
sudo yum -y update && sudo yum install -y podman
sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
sudo echo 'meta:100000:65536' >> /etc/subuid
sudo echo 'meta:100000:65536' >> /etc/subgid
sudo reboot
podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000
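One caveat about the setup commands above: with sudo echo ... >> /etc/..., the redirection is performed by the unprivileged shell rather than by sudo, so it only works if the shell is already root. A commonly used alternative (user name and ranges taken from the steps above):

```
echo 'user.max_user_namespaces=15076' | sudo tee -a /etc/sysctl.conf
echo 'meta:100000:65536' | sudo tee -a /etc/subuid
echo 'meta:100000:65536' | sudo tee -a /etc/subgid
```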