
Podman does not detect volume from the volume plugin, unlike docker #14207

Closed

mangeshpanche opened this issue May 11, 2022 · 5 comments · Fixed by #14713
Labels: kind/bug · locked - please file new issue/PR

@mangeshpanche

/kind bug

Description
If a volume is created out of band by the volume plugin, it is not detected by Podman, unlike Docker. This affects Podman deployments in a clustered environment: a volume created on shared storage from one node is not detected by Podman running on another node.

Steps to reproduce the issue:

  1. Create a volume on shared storage using a custom volume driver.

  2. Podman running on another node with the same custom volume driver does not detect the volume (see the sketch after this list).
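
A minimal reproduction sketch, assuming two cluster nodes and a shared-storage plugin named filevol (one of the volume plugins listed in the podman info output below); the driver name and volume name are illustrative:

# on node1: create the volume through the plugin, backed by shared storage
node1# podman volume create --driver filevol sharedvol
sharedvol

# on node2: the same plugin can see the volume, but node2's podman db does not know it
node2# podman volume ls --filter driver=filevol
DRIVER      VOLUME NAME
(sharedvol is missing)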

Describe the results you received:
Podman detects only volumes created locally using podman volume create.

Describe the results you expected:
Podman should detect all the volumes seen by the volume plugin driver.

Additional information you deem important (e.g. issue happens only occasionally):
With Docker, all volumes seen by the volume plugin driver are reflected in "docker volume ls".
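
For comparison, a sketch of the Docker behavior with the same hypothetical filevol plugin: Docker enumerates the plugin's volumes when listing, so the out-of-band volume appears on every node.

node2# docker volume ls
DRIVER      VOLUME NAME
filevol     sharedvol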

Output of podman version:

# podman version
Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.7

Built:      Tue Mar 15 12:15:06 2022
OS/Arch:    linux/amd64

Output of podman info --debug:

# podman info --debug
host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: journald
  hostname: eagappflx038
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.40.2.el8_4.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 49152798720
  memTotal: 66800738304
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/opt/veritas/flex/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 68708425728
  swapTotal: 68719472640
  uptime: 140h 12m 38.68s (Approximately 5.83 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
  - filevol
  - veritas
registries:
  docker.io:
    Blocked: true
    Insecure: false
    Location: docker.io
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: docker.io
  registry.access.redhat.com:
    Blocked: true
    Insecure: false
    Location: registry.access.redhat.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: registry.access.redhat.com
  search:
  - console:8443
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 10
    paused: 0
    running: 8
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 16
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1647371706
  BuiltTime: Tue Mar 15 12:15:06 2022
  GitCommit: ""
  GoVersion: go1.17.7
  OsArch: linux/amd64
  Version: 4.0.2

Package info (e.g. output of rpm -q podman or apt list podman):

# rpm -q podman
podman-4.0.2-1.module_el8.7.0+1106+45480ee0.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label May 11, 2022
@Luap99 (Member) commented May 12, 2022

@mheon PTAL

@mheon (Member) commented May 12, 2022

This was a deliberate choice on our part. We use the Podman database as our single source of truth on what volumes exist on the system at present, and where. The main reason is a simple question: what happens if I have two volume plugins, and in each of them I out-of-band create a volume named "testvol"? We can only have one volume with a given name, so what do we do? Displaying only the first one we encounter means that a podman volume rm testvol doesn't actually remove it - there's another. Do we try to prefix the name with the volume driver? If so, what happens to existing containers that want to mount "testvol", which technically no longer exists, having been replaced with "plugin1/testvol" or similar?

There are a hundred small questions like this. All of them make sourcing volume definitions from outside of Podman complicated - not impossible, but a very significant amount of work. We can potentially revisit this decision in the future but it will require a significant time and code investment to adequately handle the edge cases it introduces.
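
A sketch of the collision described above, with two hypothetical plugins plugin1 and plugin2 that each hold an out-of-band volume named testvol; if Podman listed plugin volumes directly, the name alone could not identify a volume:

# podman volume ls           (hypothetical output: both plugins report "testvol")
DRIVER      VOLUME NAME
plugin1     testvol
plugin2     testvol
# podman volume rm testvol   (which of the two should this remove?)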

@mangeshpanche (Author)

Thanks @mheon for the reply.

Docker supports this configuration: volumes available in the plugin are detected and listed. This is a must-have requirement for deployment in clustered environments, and not supporting this use case would make migrating those deployments from Docker to Podman difficult.

The presence of the same volume name in two different plugins can be treated as an invalid configuration or a corner case, and the behavior can be defined and handled accordingly. Even with Docker, the behavior is nondeterministic if two drivers report the same volume.
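
For context, Docker-style volume plugins expose a /VolumeDriver.List endpoint that the engine queries to enumerate volumes, which is how Docker sees out-of-band volumes. A hand-rolled sketch of that call, assuming the filevol plugin listens on the conventional socket path /run/docker/plugins/filevol.sock:

# curl --unix-socket /run/docker/plugins/filevol.sock -X POST -H 'Content-Type: application/json' -d '{}' http://localhost/VolumeDriver.List
{"Volumes":[{"Name":"sharedvol","Mountpoint":"/mnt/shared/sharedvol"}],"Err":""}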

The following options could be considered:

  1. If the same volume is reported by two plugins, removal of the volume has to be done by specifying the driver.
  2. A mechanism to discover the volumes from the plugin, e.g. podman volume refresh/reload.
  3. An option to bypass the database for a plugin: volume create/delete for this plugin would be sent directly to the driver.

@mangeshpanche (Author)

Is there any update on this?

@mheon (Member) commented Jun 2, 2022

No. If you'd like this escalated and have a RHEL support contract, I suggest opening an RFE bugzilla to request that this work be prioritized.

@Luap99 Luap99 self-assigned this Jun 22, 2022
Luap99 added a commit to Luap99/libpod that referenced this issue Jun 23, 2022
Libpod requires that all volumes are stored in the libpod db. Because volumes can be created outside of Podman, the db will not show all volumes available from a plugin. This podman volume reload command allows users to sync the libpod db with their external volume plugins. All new volumes from the plugin are also created in the libpod db, and when a volume in the db no longer exists in the plugin it will be removed if possible.

There are some problems:
- naming conflicts: in this case we only use the first volume found. This is not deterministic.
- race conditions: we have no control over the volume plugins, so it is possible that the volumes change while this command runs.

Fixes containers#14207

Signed-off-by: Paul Holzinger <[email protected]>
Luap99 added a commit to Luap99/libpod that referenced this issue Jul 7, 2022 (same commit message as above)
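
The command introduced by these commits is podman volume reload. A usage sketch, continuing the earlier filevol example (exact output formatting may vary by Podman version):

# podman volume reload
Added:
sharedvol

# podman volume ls --filter driver=filevol
DRIVER      VOLUME NAME
filevol     sharedvol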
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023