Container name permanently in use after infra container renamed then pod removed #11750

Closed
DigitalDJ opened this issue Sep 27, 2021 · 6 comments · Fixed by #11774
Labels
kind/bug (Categorizes issue or PR as related to a bug.) · locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments.)

Comments

DigitalDJ commented Sep 27, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

If you rename the infrastructure container within a pod and then remove the pod using podman pod rm <pod>, the name of the infrastructure container becomes permanently unavailable for use. Inspecting bolt_state.db shows that a dangling entry remains in the name-registry bucket.
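
For reference, the dangling entry can be confirmed with a small read-only Go program built on go.etcd.io/bbolt, the BoltDB library Podman's state file uses. This is only a sketch: the database path assumes the rootless graphRoot shown in the podman info output below, and "name-registry" is simply the bucket name observed while inspecting the file, not a documented interface.

```go
// inspect_names.go: list every entry in the "name-registry" bucket of
// Podman's bolt_state.db. Read-only; it does not modify the database.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	bolt "go.etcd.io/bbolt"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed rootless location: <graphRoot>/libpod/bolt_state.db
	dbPath := filepath.Join(home, ".local/share/containers/storage/libpod/bolt_state.db")

	// Open read-only so a running Podman process is not disturbed.
	db, err := bolt.Open(dbPath, 0600, &bolt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.View(func(tx *bolt.Tx) error {
		names := tx.Bucket([]byte("name-registry"))
		if names == nil {
			return fmt.Errorf("name-registry bucket not found")
		}
		// Each key is a reserved container/pod name; the value is the ID it maps to.
		return names.ForEach(func(k, v []byte) error {
			fmt.Printf("%s -> %s\n", k, v)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

After running the reproduction steps below, test_pod-infra still appears in this listing even though podman ps -a and podman pod ls show nothing.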

Steps to reproduce the issue:

  1. podman pod create --name test_pod

  2. podman rename $(podman container ls -aq --filter pod=test_pod) test_pod-infra

  3. podman pod rm test_pod

  4. podman run -d --name test_pod-infra pause

Describe the results you received:

$ podman run -d --name test_pod-infra pause
Error: name "test_pod-infra" is in use: pod already exists

$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

$ podman pod ls
POD ID      NAME        STATUS      CREATED     INFRA ID    # OF CONTAINERS

$ podman container ls -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Describe the results you expected:
A container or pod with the name test_pod-infra is created.

Additional information you deem important (e.g. issue happens only occasionally):
Seems to be reproducible with the steps above

Output of podman version:

Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "21.04"
  eventLogger: journald
  hostname: pods
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.11.0-34-generic
  linkmode: dynamic
  memFree: 4265857024
  memTotal: 8305061888
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1.5-925d-dirty
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 38h 18m 36s (Approximately 1.58 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/x/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/x/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  volumePath: /home/x/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.3.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 100:3.3.1-1 amd64 [residual-config]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

VMware Workstation - Ubuntu 21.04 - Kubic packages - rootless

openshift-ci bot added the kind/bug label Sep 27, 2021
DigitalDJ changed the title from "Container name permanently in use after infra cont rename then pod rm --force" to "Container name permanently in use after infra container renamed then pod forcefully removed" Sep 27, 2021
DigitalDJ changed the title from "Container name permanently in use after infra container renamed then pod forcefully removed" to "Container name permanently in use after infra container renamed then pod removed" Sep 27, 2021
Luap99 (Member) commented Sep 27, 2021

@mheon PTAL

mheon added a commit to mheon/libpod that referenced this issue Sep 28, 2021
As we were not updating the pod ID bucket, removing a pod with
containers still in it (including the infra container, which will
always suffer from this) will not properly update the name
registry to remove the name of any renamed containers. This
patch ensures that does not happen - all containers will be fully
removed, even if renamed.

Fixes containers#11750

Signed-off-by: Matthew Heon <[email protected]>
mheon (Member) commented Sep 28, 2021

#11774 to fix

mheon added a commit to mheon/libpod that referenced this issue Sep 29, 2021
As we were not updating the pod ID bucket, removing a pod with
containers still in it (including the infra container, which will
always suffer from this) will not properly update the name
registry to remove the name of any renamed containers. This
patch ensures that does not happen - all containers will be fully
removed, even if renamed.

Fixes containers#11750

Signed-off-by: Matthew Heon <[email protected]>
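
In rough terms, the patch makes pod removal drop each container's current (possibly renamed) name from the name registry instead of leaving it behind. The following is a simplified sketch of that idea, not libpod's actual code: only the "name-registry" bucket name comes from this issue, and the ctrRef type and function name are purely illustrative.

```go
package state // illustrative package, not part of libpod

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// ctrRef stands in for a container as the pod-removal path sees it.
// Name must be the container's *current* name, i.e. after any
// `podman rename`, or the registry entry is never cleaned up.
type ctrRef struct {
	ID   string
	Name string
}

// removePodContainerNames sketches the fix: while removing a pod and
// its containers inside one Bolt transaction, drop every container's
// current name from the name registry so nothing dangles afterwards.
func removePodContainerNames(tx *bolt.Tx, ctrs []ctrRef) error {
	names := tx.Bucket([]byte("name-registry"))
	if names == nil {
		return fmt.Errorf("name-registry bucket not found")
	}
	for _, ctr := range ctrs {
		if err := names.Delete([]byte(ctr.Name)); err != nil {
			return fmt.Errorf("removing name %q for container %s: %w", ctr.Name, ctr.ID, err)
		}
	}
	return nil
}
```
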
Nuc1eoN commented Feb 15, 2022

How do I go about removing such a container now that this has happened?

EDIT: Also, I am on AlmaLinux, so I am not sure how to get the latest Podman versions that include this fix going forward. I have a setup where I am exposed to this issue.

UPDATE: Got it all figured out!

rhatdan (Member) commented Feb 15, 2022

podman system reset, which will bring you back to the initial state and remove all containers and images.

Nuc1eoN commented Feb 16, 2022

podman system reset, which will bring you back to the initial state and remove all containers and images.

Sure, thank you. But that is not viable when one is already actively using many containers in production, is it?

mheon (Member) commented Feb 16, 2022

I'll take a look when I have time (maybe tomorrow?) to see if there's a way to remove names that experienced this. However, barring manual database editing with a hex editor (which we strongly do not recommend), I don't see an easy path forward aside from removing the database (and thus all records of existing containers).
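
For anyone hit by this before the fix who cannot afford podman system reset, the "manual database editing" mentioned above boils down to deleting the one dangling key from the name-registry bucket. The sketch below does that with bbolt instead of a hex editor, but the same caveats apply: it is unsupported and risky, every Podman process must be stopped first, and bolt_state.db should be backed up. The path and bucket name are the same assumptions as in the earlier read-only sketch.

```go
// remove_dangling_name.go: unsupported manual cleanup, use at your own
// risk. Stop every Podman process and back up bolt_state.db before
// running this; it deletes one key from the "name-registry" bucket.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	bolt "go.etcd.io/bbolt"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <dangling-name>", os.Args[0])
	}
	name := []byte(os.Args[1])

	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed rootless location, as in the inspection sketch above.
	dbPath := filepath.Join(home, ".local/share/containers/storage/libpod/bolt_state.db")

	db, err := bolt.Open(dbPath, 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		names := tx.Bucket([]byte("name-registry"))
		if names == nil {
			return fmt.Errorf("name-registry bucket not found")
		}
		if names.Get(name) == nil {
			return fmt.Errorf("name %q not present in name-registry", name)
		}
		return names.Delete(name)
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("removed %q from name-registry\n", name)
}
```

Usage would be something like go run remove_dangling_name.go test_pod-infra, followed by re-running the read-only listing above to confirm the entry is gone.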

github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023