If you run podman run -d containers in a loop, some containers try to use existing names #11735

Closed
ericcurtin opened this issue Sep 24, 2021 · 12 comments · Fixed by #12137

@ericcurtin (Contributor) commented Sep 24, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Error: error creating container storage: the container name "frosty_thompson" is already in use by "22a61c8f4ae4192e2070d519257c00f8b29a2eb56c06ff4493d0e77a26f0ef88". You have to remove that container to be able to reuse that name.: that name is already in use

Steps to reproduce the issue:

  1. Run something like for i in $(seq 1 100); do podman run -d fedora bash; done. The more iterations, the more likely the failure is to occur.

Describe the results you received:

Container creation fails because of an attempt to use a duplicate name. My workaround is to pass --name $(uuidgen) to podman, but that is just a workaround.

Describe the results you expected:

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 21:46:36 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.29-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: '
  cpus: 12
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: curtine-ThinkPad-P1-Gen-3
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.13.16-200.fc34.x86_64
  linkmode: dynamic
  memFree: 26588057600
  memTotal: 33398661120
  ociRuntime:
    name: crun
    package: crun-1.0-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.0
      commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc34.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 1h 14m 33.05s (Approximately 0.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/curtine/.config/containers/storage.conf
  containerStore:
    number: 100
    paused: 0
    running: 100
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/curtine/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  volumePath: /home/curtine/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1630356396
  BuiltTime: Mon Aug 30 21:46:36 2021
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.3.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.3.1-1.fc34.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

No

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora 34 physical

@openshift-ci bot added the kind/bug label Sep 24, 2021
@Luap99 (Member) commented Sep 24, 2021

I cannot reproduce this. I increased the number of iterations, but it never failed for me.

@flouthoc (Collaborator) commented

@Luap99 @ericcurtin Since we have a finite number of first and last names, this can always happen if you run this in a loop. I believe we can increase the randomness with the patch below, but even that still has a chance of collision.

@ericcurtin For your use case, using a custom $UUID is much better (which you are already doing); since you are spawning so many containers, generated names don't make much sense here.

--- a/libpod/runtime.go
+++ b/libpod/runtime.go
@@ -913,7 +913,7 @@ func (r *Runtime) Info() (*define.Info, error) {
 // generateName generates a unique name for a container or pod.
 func (r *Runtime) generateName() (string, error) {
        for {
-               name := namesgenerator.GetRandomName(0)
+               name := namesgenerator.GetRandomName(1)
                // Make sure container with this name does not exist
                if _, err := r.state.LookupContainer(name); err == nil {
                        continue

@mheon (Member) commented Sep 24, 2021

I can see how this might be a problem if the podman run commands were running in parallel, but in series this should never be a problem. Podman will check if the name is in use and automatically find a new one if it is; there is no potential that we run out unless the total number of adjective-name combinations is exhausted, and AFAIK that's in the tens of thousands. Are you certain that's the actual reproducer, and you're not doing something that is running podman run in parallel?
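For a rough sense of why the parallel case collides so readily: if N processes each pick a name before any of them registers it, this is a birthday-paradox situation. The sketch below is illustrative only, not Podman code, and the pool size of roughly 25,000 adjective_surname combinations is an assumption based on the "tens of thousands" figure above.

package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumed pool of adjective_surname combinations ("tens of thousands").
	const poolSize = 25000.0
	for _, n := range []float64{10, 100, 500} {
		// If n processes each pick a name before any of them registers it,
		// P(at least one duplicate) is roughly 1 - exp(-n(n-1) / (2 * poolSize)).
		p := 1 - math.Exp(-n*(n-1)/(2*poolSize))
		fmt.Printf("%4.0f parallel runs: ~%.1f%% chance of a duplicate name\n", n, p*100)
	}
}

With these assumptions, 100 parallel runs already have roughly a one-in-five chance of drawing a duplicate, which matches the reproducer being intermittent but not rare.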

@ericcurtin (Contributor, Author) commented

I may have run that loop with an ampersand at the end of the podman run statement.

@Luap99 (Member) commented Sep 27, 2021

Yes, for i in $(seq 1 100); do podman run -d alpine echo & done reproduces it for me.
@mheon Should we lock the runtime before we get the random name and only unlock after we have written the container to the DB?
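A minimal sketch of that locking idea, using hypothetical helper names; in real Podman this would have to be a cross-process lock rather than an in-process mutex, since each podman run is its own process.

package main

import (
	"fmt"
	"sync"
)

// nameAllocLock stands in for what would need to be a cross-process lock.
var nameAllocLock sync.Mutex

// createWithUniqueName holds the lock from name generation until the
// container has been registered, so no concurrent creation can pick the
// same name in between. All names here are hypothetical.
func createWithUniqueName(generate func() string, register func(string) error) (string, error) {
	nameAllocLock.Lock()
	defer nameAllocLock.Unlock()
	name := generate()
	if err := register(name); err != nil {
		return "", err
	}
	return name, nil
}

func main() {
	registered := map[string]bool{}
	name, err := createWithUniqueName(
		func() string { return "frosty_thompson" },
		func(n string) error {
			if registered[n] {
				return fmt.Errorf("the container name %q is already in use", n)
			}
			registered[n] = true
			return nil
		},
	)
	fmt.Println(name, err)
}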

@mheon (Member) commented Sep 27, 2021 via email

@vrothberg (Member) commented

I am not too worried about the lock. Container creation may involve pulling an image and certainly mounts the image, and we are already touching various lock files many times along the way.

@Luap99 time to tackle it?

@mheon (Member) commented Sep 27, 2021 via email

@vrothberg (Member) commented

"Maybe we should add name retry logic later in the process, if c/storage says the name is taken when we try to create the container there."

That is a good idea. In any case, these checks will be performed anyway, so we'd spare one lookup.
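A minimal sketch of that retry idea, with hypothetical helper and error names (the bound of 10 attempts matches the fix referenced below); this is an illustration, not the actual Podman implementation.

package main

import (
	"errors"
	"fmt"
)

// errNameInUse is a hypothetical sentinel standing in for the error
// c/storage returns when a container name is already taken.
var errNameInUse = errors.New("container name is already in use")

const maxNameRetries = 10 // the merged fix allows at most 10 attempts

// createWithRetry draws a fresh random name and retries creation whenever
// the name turns out to be taken, instead of only checking up front.
func createWithRetry(generate func() string, create func(string) error) (string, error) {
	var lastErr error
	for i := 0; i < maxNameRetries; i++ {
		name := generate()
		lastErr = create(name)
		if lastErr == nil {
			return name, nil
		}
		if !errors.Is(lastErr, errNameInUse) {
			return "", lastErr // unrelated failure: do not retry
		}
	}
	return "", fmt.Errorf("giving up after %d attempts: %w", maxNameRetries, lastErr)
}

func main() {
	taken := map[string]bool{"frosty_thompson": true}
	names := []string{"frosty_thompson", "bold_khayyam"}
	i := 0
	name, err := createWithRetry(
		func() string { n := names[i%len(names)]; i++; return n },
		func(n string) error {
			if taken[n] {
				return errNameInUse
			}
			taken[n] = true
			return nil
		},
	)
	fmt.Println(name, err) // prints: bold_khayyam <nil>
}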

@github-actions bot commented

A friendly reminder that this issue had no activity for 30 days.

@rhatdan added the Good First Issue label and removed the stale-issue label Oct 28, 2021
@rhatdan (Member) commented Oct 28, 2021

I don't believe anyone has looked into this yet. Anyone interested in fixing this problem?

@vrothberg (Member) commented

Yes, I have some cycles. Thank you for the reminder!

@vrothberg self-assigned this Oct 28, 2021
@vrothberg added the In Progress label Oct 28, 2021
vrothberg added a commit to vrothberg/libpod that referenced this issue Nov 8, 2021
Address the TOCTOU when generating random names by having at most 10
attempts to assign a random name when creating a pod or container.

[NO TESTS NEEDED] since I do not know a way to force a conflict with
randomly generated names in a reasonable time frame.

Fixes: containers#11735
Signed-off-by: Valentin Rothberg <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Nov 12, 2021
@github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023