
Issue with volumes when recreating a pod #13715

Closed
vanyasvl opened this issue Mar 30, 2022 · 1 comment · Fixed by #13732
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments


vanyasvl commented Mar 30, 2022

/kind bug

Description

When you create a pod from a kube YAML file with a ConfigMap volume, you can't recreate the pod with --replace:

Error: cannot create a local volume for volume from configmap "test_cm": volume with name test_volume already exists: volume already exists

Steps to reproduce the issue:

  1. Create pod and configmap YAMLs:
apiVersion: v1
kind: Pod
metadata:
  name: irp-rest
spec:
  containers:
    - name: test
      image: debian:bullseye
      volumeMounts:
        - name: test_volume
          mountPath: /test
          readOnly: true
  volumes:
    - name: test_volume
      configMap:
        name: test_cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test_cm
data:
  file: text
  2. Create the pod:
podman play kube --configmap test_cm.yaml test_pod.yaml
  3. Make any change in the YAML and try to recreate the pod:
podman play kube --replace --configmap test_cm.yaml test_pod.yaml

Describe the results you received:

Error: cannot create a local volume for volume from configmap "test_cm": volume with name test_volume already exists: volume already exists

Describe the results you expected:
I expect the pod to be recreated with the configmap volume. I don't want to clean up the volume with podman volume rm every time I recreate a pod from kube YAML.
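As a stopgap until the fix lands, a possible workaround is to remove the stale volume before replaying the YAML. This is a sketch only: it assumes the leftover volume is named test_volume, matching the volume name in the YAML above, and that no other pod still uses it.

```shell
# Workaround sketch (assumes the stale volume is named "test_volume"):
# remove the leftover configmap-backed volume so that the subsequent
# "play kube --replace" does not collide with the existing volume name.
podman volume rm test_volume 2>/dev/null || true  # ignore error if it does not exist
podman play kube --replace --configmap test_cm.yaml test_pod.yaml
```

This only hides the symptom; the underlying bug is that play kube does not reuse or replace the volume it created on the previous run.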

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.18

Built:      Wed Mar 30 11:06:28 2022
OS/Arch:    linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpus: 8
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: file
  hostname: app-sid.staging.irp.servers.im
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2013
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 2013
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
  kernel: 5.15.0-23-generic
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 4244520960
  memTotal: 8336478208
  networkBackend: cni
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17
      commit: 0e9229ae34caaebcb86f1fde18de3acaf18c6d9a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/2013/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.6.1
  swapFree: 0
  swapTotal: 0
  uptime: 4h 25m 21.23s (Approximately 0.17 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/sid/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/sid/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/2013/containers
  volumePath: /home/sid/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1648638388
  BuiltTime: Wed Mar 30 11:06:28 2022
  GitCommit: ""
  GoVersion: go1.18
  OsArch: linux/amd64
  Version: 4.0.2

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 30, 2022
@flouthoc
Collaborator

Hi @vanyasvl , thanks for creating the issue. Above PR should close this.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023