userns_size has no effect #16562

Closed
anotherwon opened this issue Nov 19, 2022 · 5 comments · Fixed by containers/common#1238

Comments

@anotherwon

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Setting userns_size in containers.conf has no effect.

Steps to reproduce the issue:

  1. /etc/containers/containers.conf:
[containers]
userns = "auto"
userns_size = 65536
  2. podman run --rm -it alpine cat /proc/self/uid_map

Describe the results you received:

         0          1       1024

Describe the results you expected:

         0          1      65536
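(For reference, each line of /proc/self/uid_map has three fields: the first UID inside the user namespace, the first UID outside it, and the length of the mapped range. The received output therefore shows an automatically allocated mapping of only 1024 UIDs rather than the configured 65536.)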

Additional information you deem important (e.g. issue happens only occasionally):
podman run --rm -it --userns auto:size=65536 alpine cat /proc/self/uid_map
works, but I need this to be applied automatically.

Output of podman version:

Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.19.3
Git Commit:   814b7b003cc630bf6ab188274706c383f9fb9915-dirty
Built:        Thu Nov 10 15:59:17 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.5-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: c9f7f19eb82d5b8151fc3ba7fbbccf03fdcd0325'
  cpuUtilization:
    idlePercent: 81.52
    systemPercent: 8.3
    userPercent: 10.19
  cpus: 16
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: xxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 655360
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 655360
  kernel: 6.0.9-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 14247645184
  memTotal: 31256666112
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.7-1
    path: /usr/bin/crun
    version: |-
      crun version 1.7
      commit: 40d996ea8a827981895ce22886a9bac367f87264
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 12884897792
  swapTotal: 12884897792
  uptime: 18h 30m 48.00s (Approximately 0.75 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/anotherwon/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /home/anotherwon/.local/share/containers/storage
  graphRootAllocated: 300647710720
  graphRootUsed: 288446124032
  graphStatus:
    Build Version: Btrfs v6.0.1
    Library Version: "102"
  imageCopyTmpDir: /tmp
  imageStore:
    number: 65
  runRoot: /run/user/1000/containers
  volumePath: /home/anotherwon/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668092357
  BuiltTime: Thu Nov 10 15:59:17 2022
  GitCommit: 814b7b003cc630bf6ab188274706c383f9fb9915-dirty
  GoVersion: go1.19.3
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1


Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

$ pacman -Q podman
podman 4.3.1-1

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 19, 2022
@rhatdan
Member

rhatdan commented Nov 20, 2022

You're right, it does not look like it is used within the code. We allocate the user namespace based on the minimum number of UIDs needed for the image * 2, I believe.

@giuseppe Did you plan on using this field, or should it just be removed?

@giuseppe
Member

I was not aware we had that setting. It was added as part of containers/common@bd0a08c, but it was never plugged into Podman.

@giuseppe
Member

After looking at it, I think it is better to drop it and allow customizations through the userns setting itself, e.g.

[containers]
userns = "auto:size=8191"

Even if we maintain userns_size, we'd still need to change its default value, because it is now hardcoded to 65536, making it impossible to know whether it was set in the configuration or the default value is being used.
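
Applied to the reporter's case, the configuration would presumably become (assuming the size= syntax shown above carries over to containers.conf unchanged):

[containers]
userns = "auto:size=65536"

after which the reproducer from the original report should print the expected mapping:

$ podman run --rm -it alpine cat /proc/self/uid_map
         0          1      65536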

rhatdan added a commit to rhatdan/common that referenced this issue Nov 21, 2022
Podman and Buildah do not use this field, and I know of no users of it. Remove it from the docs and the default conf file, so users will not expect it to do anything.

Leaving the implementation in, on the slight chance someone has used it in a non-containers project.

Fixes: containers/podman#16562

Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan
Member

rhatdan commented Nov 21, 2022

I think we should just remove the documentation of it, so I opened a PR.

@giuseppe
Member

#16571

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 9, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 9, 2023