
Is there any chance to run rootless podman container inside another one? #4056

Closed
psmolkin opened this issue Sep 17, 2019 · 58 comments
Labels: kind/feature, locked - please file new issue/PR, stale-issue

Comments

@psmolkin

/kind feature

Description

I tried to run rootless podman inside another, privileged container, but the namespace mapping doesn't work.

Steps to reproduce the issue:

  1. Run Fedora 30
  2. Install podman
  3. # podman run --privileged --detach --name=test --net=host --security-opt label=disable --security-opt seccomp=unconfined --device /dev/fuse:rw quay.io/podman/testing sh -c 'tail -f /dev/null'
     # podman exec -it test bash
  4. # groupadd -g 1001 test
  5. # useradd -g 1001 -u 1001 test
  6. # cat /etc/sub?id
     test:100000:65536
     test:100000:65536
  7. # su test
  8. podman unshare cat /proc/self/uid_map
     WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
              0       1001          1

Describe the results you received:
rootless single mapping

Describe the results you expected:
Something like this:

podman unshare cat /proc/self/uid_map
         0       1001     1
         1       100000   65536
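
For reference (editor's note, not part of the original report): each row of /proc/self/uid_map reads <first uid inside the namespace> <first uid in the parent namespace> <range length>, so the expected mapping above would mean:

# 0      1001     1      -> uid 0 (root) inside the namespace is the unprivileged host uid 1001
# 1      100000   65536  -> uids 1-65536 inside the namespace are host uids 100000-165535 (the subuid range)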

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.5.1
RemoteAPI Version:  1
Go Version:         go1.12.7
OS/Arch:            linux/amd64

Output of podman info --debug:

WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
ERRO[0000] unable to write system event: "write unixgram @000ae->/run/systemd/journal/socket: sendmsg: no such file or directory"
debug:
  compiler: gc
  git commit: ""
  go version: go1.12.7
  podman version: 1.5.1
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: podman-1.5.1-3.fc30.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.0, commit: d728afa06cd2df86a27f32a4692c7099a56acc97-dirty'
  Distribution:
    distribution: fedora
    version: "30"
  MemFree: 320118784
  MemTotal: 2552766464
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: e3b4c1108f7d1bf0d09ab612ea09927d9b59b4e3
      spec: 1.0.1-dev
  SwapFree: 2191708160
  SwapTotal: 2206199808
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: 172.17.0.183
  kernel: 5.2.14-200.fc30.x86_64
  os: linux
  rootless: true
  uptime: 3h 20m 18.65s (Approximately 0.12 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/test/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/test/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /tmp/run-1001
  VolumePath: /home/test/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

Name         : podman
Epoch        : 2
Version      : 1.5.1
Release      : 3.fc30
Architecture : x86_64
Size         : 54 M
Source       : podman-1.5.1-3.fc30.src.rpm
Repository   : @System
From repo    : updates
Summary      : Manage Pods, Containers and Container Images
URL          : https://podman.io/
License      : ASL 2.0

Additional environment details (AWS, VirtualBox, physical, etc.):
Hyper-V

@openshift-ci-robot openshift-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 17, 2019
@mheon (Member) commented Sep 17, 2019

This should be theoretically possible, but I don't think anyone has successfully achieved it.

@giuseppe @rhatdan There seems to be a fair bit of interest in this, so we might want to look into what it would take and write a tutorial on how to do it.

@TomSweeneyRedHat (Member)

We're probably a bit closer with upstream/1.6.0 with crun in play, but I think there are still some hiccups.

@giuseppe (Member)

It looks like newuidmap/newgidmap don't get enough privileges to set up the namespace.

What is the result of getcap /usr/bin/newuidmap?

In case that is empty, you may try chmod +s /usr/bin/newuidmap /usr/bin/newgidmap

I am afraid the new*map programs are missing the file capabilities, either because of the way Fedora images are built, or because they don't work correctly within overlayfs.
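
For context, an editor's sketch of what a correctly configured host typically shows, plus a file-capability alternative to the setuid bit (the exact getcap output format varies with the libcap version):

# getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap cap_setuid=ep
/usr/bin/newgidmap cap_setgid=ep

# If the capabilities are missing, setcap can restore them instead of chmod +s:
# setcap cap_setuid+ep /usr/bin/newuidmap
# setcap cap_setgid+ep /usr/bin/newgidmap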

@psmolkin (Author) commented Sep 18, 2019

@giuseppe Yes, I already tried to do this, based on your other comment. But unfortunately that didn’t change anything.

@giuseppe (Member)

I've tried similar steps to yours and it seems to work fine:

# podman run --privileged --name=test --net=host --security-opt label=disable --security-opt seccomp=unconfined --device /dev/fuse:rw --rm -ti fedora sh
# yum install -y podman crun
# chmod +s /usr/bin/newuidmap /usr/bin/newgidmap
# groupadd -g 1001 test && useradd -g 1001 -u 1001 test
# su test
$ podman --cgroup-manager cgroupfs unshare cat /proc/self/uid_map
         0       1001          1
         1     100000      65536

So it must be something else that is going wrong.
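
(Editor's sketch, not from the comment: to avoid passing --cgroup-manager on every invocation, the same default could be set in libpod.conf, the configuration file that podman 1.x reads:)

# /etc/containers/libpod.conf or ~/.config/containers/libpod.conf
cgroup_manager = "cgroupfs"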

@giuseppe (Member)

Is there any pause process running inside the container? Could you try podman system migrate && podman --cgroup-manager cgroupfs unshare cat /proc/self/uid_map as rootless?

@giuseppe (Member)

Is there any pause process running inside the container? Could you try podman system migrate && podman --cgroup-manager cgroupfs unshare cat /proc/self/uid_map as rootless?

@psmolkin have you had a chance to try it out?

@psmolkin (Author)

@giuseppe
I apologize for the late reply.
I tried to install crun and change the default runtime in /usr/share/containers/libpod.conf.
I tried to do the same in the container and ran system migrate, but nothing changed:

$ podman system migrate && podman --log-level debug --cgroup-manager cgroupfs unshare cat /proc/self/uid_map
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
INFO[0000] running as rootless
DEBU[0000] using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/test/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1001
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1001/libpod/tmp
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=overlayfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/runc"
         0       1001          1
$ cat /etc/sub?id
test:100000:65536
test:100000:65536

@johanbrandhorst

I'm also trying to get this to work, with the aim of eventually being able to run automated test suites that start local containers within an unprivileged docker or podman container. I'm able to get this far with both archlinux/base and fedora bases.

After installing podman and confirming podman info works, this is what I get when trying to run a container:

# podman run --rm -it ubuntu
ERRO[0000] unable to write system event: "write unixgram @00045->/run/systemd/journal/socket: sendmsg: no such file or directory" 
Trying to pull docker.io/library/ubuntu...
Getting image source signatures
Copying blob 5667fdb72017 done
Copying blob d83811f270d5 done
Copying blob ee671aafb583 done
Copying blob 7fc152dfb3a6 done
Copying config 2ca708c1c9 done
Writing manifest to image destination
Storing signatures
ERRO[0007] unable to write pod event: "write unixgram @00045->/run/systemd/journal/socket: sendmsg: no such file or directory" 
ERRO[0007] error creating network namespace for container fc189c2fb049f6d0955773f86245d7394e0a35181ca97c23782e4b17f8f66fba: mount --make-rshared /var/run/netns failed: "operation not permitted" 
ERRO[0007] unable to write pod event: "write unixgram @00045->/run/systemd/journal/socket: sendmsg: no such file or directory" 
Error: failed to mount shm tmpfs "/home/REDACTED/.local/share/containers/storage/vfs-containers/fc189c2fb049f6d0955773f86245d7394e0a35181ca97c23782e4b17f8f66fba/userdata/shm": operation not permitted

The basic steps I'm following:

  1. Install podman on local, bare metal machine
  2. Start a container in which podman is easy to install (archlinux/base, fedora).
  3. Install podman
  4. Configure podman to use vfs, since I was getting overlay errors (see the storage.conf sketch after this comment)
    Sidenote: I think this is because my bare metal podman installation is configured with vfs.
    The error I'm seeing is:
    # podman info
    ERRO[0000] 'overlay' is not supported over extfs at "/var/lib/containers/storage/overlay" 
    Error: could not get runtime: kernel does not support overlay fs: 'overlay' is not supported over extfs at "/var/lib/containers/storage/overlay": backing file system is unsupported for this graph driver
    
  5. Run container from within container (see log above)

Am I missing something? I'm testing this locally with podman on bare metal, but the environment I'm really targeting is docker on CircleCI.
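
(Editor's sketch for step 4 above — switching rootless podman to vfs is typically done in storage.conf:)

# ~/.config/containers/storage.conf
[storage]
driver = "vfs"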

@mheon (Member) commented Sep 26, 2019

You'll probably want to run the outer container with either --privileged or --security-opt seccomp=unconfined

@mheon (Member) commented Sep 26, 2019

(I think seccomp will block the mount calls otherwise)

@johanbrandhorst

Thanks for the tip, but that unfortunately defeats the whole point :(. Is there any chance this will be possible without --privileged eventually?

@mheon (Member) commented Sep 26, 2019

Not without changes to the Seccomp profile (and potentially other things) - Seccomp blocks a lot of things (like the mount calls I mentioned) that we need to complete setup.

@johanbrandhorst

https://stackoverflow.com/a/56856410 might be useful for this discussion too.

@rhatdan (Member) commented Sep 29, 2019

Could you try removing seccomp? The seccomp.json that Docker ships blocks the mount syscall even when it is deemed safe by the kernel; i.e., unprivileged mounts of procfs/sysfs, bind mounts, and FUSE mounts are allowed for non-privileged users, but they require the mount syscall.

The seccomp.json that we ship with Podman allows the mount syscall. You might need a couple of other syscalls that Docker blocks.

Might be other issues as well.

You could try to run podman within podman and see if this works.
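
(An editor's sketch of what that test could look like, assuming the outer engine is docker and that podman's profile is installed at its usual containers-common path:)

docker run --rm -it --device /dev/fuse \
    --security-opt seccomp=/usr/share/containers/seccomp.json \
    quay.io/podman/stable podman info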

@github-actions (bot)

This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.

@mheon (Member) commented Oct 30, 2019

We may want a tracker issue for this. I think we have 3-4 open issues about this.

@vrothberg (Member)

We may want a tracker issue for this. I think we have 3-4 open issues about this.

@rhatdan, I believe you're working on this at the moment. Would you open a tracker issue?

@github-actions github-actions bot closed this as completed Nov 7, 2019
@chengkuangan

> (Quoting @johanbrandhorst's earlier comment in full — see above.)

I am using Docker, and I do this in my Dockerfile: I build Go and libpod from scratch during docker build and also set events_logger to file. The error went away, but I have another issue similar to the one reported here.

RUN sed -i 's/# events_logger = "journald"/events_logger = "file"/g' $GOPATH/src/github.com/containers/libpod/libpod.conf

RUN cp /var/go/src/github.com/containers/libpod/libpod.conf /etc/containers/
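
(Editor's note: on later podman releases, libpod.conf was superseded by containers.conf, where the equivalent setting lives under the [engine] table:)

# ~/.config/containers/containers.conf or /etc/containers/containers.conf
[engine]
events_logger = "file"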

@rhatdan (Member) commented Mar 25, 2020

Currently this requires a privileged container and it requires you to mount a different volume on /var/lib/containers/

@FlorianLudwig

Currently this requires a privileged container and it requires you to mount a different volume on /var/lib/containers/

So I would assume this would work:

podman run --privileged --rm -ti --net=host --security-opt label=disable --security-opt seccomp=unconfined -v ~/tmp_container:/var/lib/containers/ fedora:31 sh -c "dnf install -y podman && podman info"

but it doesn't:

ERRO[0000] 'overlay' is not supported over extfs at "/var/lib/containers/storage/overlay" 
Error: could not get runtime: kernel does not support overlay fs: 'overlay' is not supported over extfs at "/var/lib/containers/storage/overlay": backing file system is unsupported for this graph driver

@rhatdan (Member) commented Apr 9, 2020

Actually, to get this to work you would need to use fuse-overlayfs, since you are not allowed to use native overlay as non-root.
You could also try the vfs storage driver and see if that works.
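
(Editor's sketch of the corresponding storage.conf for the inner podman, assuming fuse-overlayfs is installed in the image; key placement follows the storage.conf layout of that era:)

# /etc/containers/storage.conf inside the container
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"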

@rhatdan (Member) commented Apr 9, 2020

This works:
podman run -ti --cap-add SYS_ADMIN --device /dev/fuse quay.io/podman/stable podman info

@psmolkin (Author)

@rhatdan thanks for the updates!
Which host systems should I use?

@rhatdan (Member) commented Apr 10, 2020

I don't really understand the question.

@FlorianLudwig commented Apr 13, 2020

@rhatdan

thank you for your patience :)!

podman run -ti --cap-add SYS_ADMIN --device /dev/fuse quay.io/podman/stable podman info

does indeed work! But trying to execute anything inside it fails with networking errors:

podman run -ti --cap-add SYS_ADMIN --device /dev/fuse quay.io/podman/stable podman run hello-world
Trying to pull docker.io/library/hello-world...

Getting image source signatures
Copying blob 1b930d010525 done
Copying config fce289e99e done
Writing manifest to image destination
Storing signatures
ERRO[0005] Error adding network: operation not permitted 
ERRO[0005] Error while adding to cni lo network: operation not permitted 
Error: error configuring network namespace for container 4f6cdd985ee9c0adeec364425ad8f19bbc07de00cf0ca2b3773dc61aba7cc256: operation not permitted

Or:

$ podman run --net=none -ti --security-opt label=disable --security-opt seccomp=unconfined -v ~/tmp_container:/var/lib/containers/ --cap-add SYS_ADMIN --device /dev/fuse quay.io/podman/stable podman run --net=none hello-world
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 120d2572059087516d0bac18a8cdbab99d86a3419e1d8228c2cbfc830bad00ef: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 2177e54067c62c702d7e0675287f23d0e7c8b23d45960740f30183f543001ab2: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 2fc722295834ee74c63a0b62fe49f7d6699d5ddc78c210ff6638086785b0c38f: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 3bc002079e65317e9dffe81cc3490787a01f50c45e8923f9b8e0e40daf54fb56: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 4a546bc670f756c94bc3a8ca275025076da9ea473efafea9e3a3d63dcd7ac40d: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container 946dfcb6cfd0ddf496e6b9e7e910c3735f9344a17d1a6f8bb425175b90a9f7ad: neither iptables nor ip6tables usable 
ERRO[0000] Error deleting network: neither iptables nor ip6tables usable 
ERRO[0000] Error while removing pod from CNI network "podman": neither iptables nor ip6tables usable 
ERRO[0000] Error refreshing container c9e7e82bb08ba5e6eac12db1fe8062cb57c45c40a7038c583f8b0a024d9e0361: neither iptables nor ip6tables usable 
Error: setrlimit `RLIMIT_NOFILE`: Operation not permitted: OCI runtime permission denied error

I tried --net=none and --net=host; both fail with an iptables error.

EDIT: I had copied the wrong second example; fixed that and clarified my question.

@mgoltzsche (Contributor) commented Dec 28, 2020

The newuidmap error in rootless mode vanishes if you assign a bigger subuid/subgid range on your host as you pointed out previously, e.g.:

sudo sh -c "echo $(id -un):100000:200000 >> /etc/subuid"
sudo sh -c "echo $(id -gn):100000:200000 >> /etc/subgid"
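
(Editor's note: if a rootless pause process is already running, podman keeps using the old mapping; restarting it lets the new range take effect:)

podman system migrate                    # stops the rootless pause process
podman unshare cat /proc/self/uid_map    # should now show the full subuid range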

The file system error in my environment happens due to #8849; the error disappears when I

  • mount a host directory as storage directory into the container and
  • set --security-opt seccomp=unconfined.

However, when using docker to run the outer container, this is not necessary since --privileged is sufficient.
I'd expect the same behaviour from podman.
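
(For concreteness, an editor's sketch of the minimal outer-docker invocation being described, using the quay.io/podman/stable image:)

docker run --privileged --rm quay.io/podman/stable \
    podman run --rm docker.io/library/alpine echo hello from nested podman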

@dustymabe (Contributor)

I just had some success here. This is with podman-4.2.0-2.fc37.x86_64 on Fedora CoreOS next stream version 37.20220910.1.0.

[core@weevm ~]$ podman run --rm --privileged -u podman:podman quay.io/podman/stable podman run --rm docker.io/alpine echo hello from nested container
Trying to pull quay.io/podman/stable:latest...
Getting image source signatures
Copying blob 278e7a304533 done  
Copying blob be14bb595350 done  
Copying blob 62946078034b done  
Copying blob 2b218512437b done  
Copying blob edaf758377ad done  
Copying blob 0fb00f3482eb done  
Copying blob 457330bd6fd1 done  
Copying blob 4b95b3482e2b done  
Copying config ba7f403b92 done  
Writing manifest to image destination
Storing signatures
time="2022-09-13T17:20:05Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:213ec9aee27d8be045c6a92b7eac22c9a64b44558193775a1a7f626352392b49
Copying config sha256:9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5
Writing manifest to image destination
Storing signatures
hello from nested container
[core@weevm ~]$ rpm -q podman
podman-4.2.0-2.fc37.x86_64
[core@weevm ~]$ cat /etc/subuid
core:100000:65536
[core@weevm ~]$ cat /etc/subgid
core:100000:65536
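
(Editor's sketch of the general pattern used by the quay.io/podman/stable image — giving the in-container user a subuid/subgid range at image-build time. The user name and range below are illustrative, not copied from that image's Containerfile:)

FROM registry.fedoraproject.org/fedora:37
RUN dnf install -y podman fuse-overlayfs \
 && useradd builder \
 && echo "builder:100000:65536" > /etc/subuid \
 && echo "builder:100000:65536" > /etc/subgid
USER builder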

dustymabe added a commit to dustymabe/coreos-assembler that referenced this issue Sep 13, 2022
We do *some* podman operations inside the COSA container. If running
locally as the `builder` user, podman will barf when trying to run
newuidmap if we don't change the subuid/subgid mappings.

With this change we'll be able to test in our local rootless podman
COSA container that `cosa push-container-manifest` works.

In order to figure out how to make this work (at least for the limited podman
manifest commands I'm running), I first followed the issue at [1]
and realized I had success with the `quay.io/podman/stable` image,
and then looked inside the image to see what the mapping was.
I then lifted the mapping from there [2], applied it here, and
it works.

Note that inside the pipeline right now (in OpenShift) we still run
as a random user but that seems to still be working OK for us for
pushing the manifest because it can't find the random UID/GID in
/etc/{subuid,subgid} so it falls back to "rootless single mapping
into the namespace".

[1] containers/podman#4056 (comment)
[2] https://github.com/containers/podman/blob/6e382d9ec2e6eb79a72537544341e496368b6c63/contrib/podmanimage/stable/Containerfile#L25-L26
(The same commit message was repeated verbatim in many further commit references — cherry-picks of the same change into coreos/coreos-assembler between Sep 2022 and Dec 2022 — omitted here.)
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 15, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 15, 2023