
podman play kube: duplicate volume names lead to stalled process #5654

Closed
jayaddison opened this issue Mar 29, 2020 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@jayaddison

/kind bug

Description
While working from a git checkout of https://github.com/jayaddison/grocy-docker/tree/b78b1810ff159e583a8770694305e2e5472ddb76, running podman play kube grocy.yaml results in a blocked-process condition.

Steps to reproduce the issue:

# 1. Produce grocy-app:libpod-issue-5631 and grocy-nginx:libpod-issue-5631 images
make build

# 2. Clear state
podman pod stop -a; podman container rm -a ; podman volume rm -a ; podman pod rm -a

# 3. Attempt playthrough (with system-call tracing via strace)
strace podman play kube grocy.yaml

Describe the results you received:
The podman process stalls without producing any output to stdout.

Describe the results you expected:
The podman process should complete after the successful creation of a pod (named grocy) with two containers (named grocy-app and grocy-nginx).

Additional information you deem important (e.g. issue happens only occasionally):
The stall-point doesn't seem consistent between playthroughs; this can be observed by running the repro steps multiple times and comparing the system-call trace output produced by strace during each run.

A workaround is available: by providing unique, distinguishing name values for the three mounts (example), the problem is avoided and the podman play kube step succeeds.
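The actual grocy.yaml lives in the linked repository and isn't reproduced here; the following is a hypothetical sketch of the workaround pattern (volume names, mount paths, and images are illustrative, not the real values): each volumeMount gets its own distinct name, each backed by its own volume, so no name is duplicated.

```yaml
# Hypothetical sketch of the workaround, not the actual grocy.yaml.
# Each mount has a unique name with a matching volumes entry.
apiVersion: v1
kind: Pod
metadata:
  name: grocy
spec:
  containers:
  - name: grocy-app
    image: grocy-app:libpod-issue-5631
    volumeMounts:
    - name: grocy-app-data       # unique name per mount ...
      mountPath: /var/www/data
    - name: grocy-nginx-conf     # ... avoids the stall described above
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: grocy-app-data
    emptyDir: {}
  - name: grocy-nginx-conf
    emptyDir: {}
```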

Output of podman version:

podman version 1.8.2

Output of podman info:


host:
  BuildahVersion: 1.14.3
  CgroupVersion: v1
  Conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.14, commit: '
  Distribution:
    distribution: ubuntu
    version: "19.10"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
  MemFree: 658882560
  MemTotal: 3935461376
  OCIRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 2
  eventlogger: journald
  hostname: ubuntu
  kernel: 5.3.0-42-generic
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: 'slirp4netns: /usr/bin/slirp4netns'
    Version: |-
      slirp4netns version 0.4.3
      commit: unknown
  uptime: 3h 6m 58.86s (Approximately 0.12 days)
registries:
  search:
  - docker.io
store:
  ConfigFile: /home/jay/.config/containers/storage.conf
  ContainerStore:
    number: 7
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.4.1
        fuse-overlayfs: version 0.7.6
        FUSE library version 3.4.1
        using FUSE kernel interface version 7.27
  GraphRoot: /home/jay/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: ecryptfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 7
  RunRoot: /run/user/1000
  VolumePath: /home/jay/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

$ apt list podman
Listing... Done
podman/unknown,now 1.8.2~1 amd64 [installed]
podman/unknown 1.8.2~1 arm64
podman/unknown 1.8.2~1 armhf
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 29, 2020
@mheon
Member

mheon commented Mar 30, 2020

My initial inclination would be that passing volumes with duplicate names sounds like a mistake on the user's part, so we should trap it early and inform the user that they need to fix their YAML. Or is this something that Kubernetes allows, so we'll have to handle?

@jayaddison
Author

@mheon Although this is a slightly indirect answer, I do currently believe that mounting a single named-volume at multiple volume mounts is supported by Kubernetes.

This is my initial understanding after having discovered current Kubernetes documentation that explains the subPath option on volume mounts. That documentation includes an example illustrating a single named-volume (site-data) mounted at two different locations within a pod.

Less strongly, but also in support of this theory: the Kubernetes VolumeMount documentation places no documented uniqueness restriction on the name field (unlike the Volume name, which according to the docs must be unique within a pod).

These lead me to believe that it's valid for a single volume name to appear across multiple entries within volumeMounts, for the purpose of accessing volume contents via multiple filesystem paths.
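The site-data pattern from the Kubernetes subPath documentation referenced above is along these lines (a sketch from memory, not a verbatim copy; details may differ from the current docs): one volume, mounted into two containers at different paths via distinct subPath values.

```yaml
# Sketch of the Kubernetes subPath pattern: one volume (site-data)
# mounted at two locations, distinguished by subPath.
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: site-data
      mountPath: /var/lib/mysql
      subPath: mysql            # only the mysql/ subdirectory of the volume
  - name: php
    image: php:7.0-apache
    volumeMounts:
    - name: site-data
      mountPath: /var/www/html
      subPath: html             # only the html/ subdirectory of the volume
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data
```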

@mheon
Member

mheon commented Mar 30, 2020

This isn't a volume, though, but a volumeMount - they're very different, AFAIK (volumes are fully managed, volumeMount is a simple bind mount from host to container). I agree that volumes should be allowed to be mounted at multiple points; I'm less convinced about volume mounts.

Hmmm... @haircommander Looking at the full description, Name in a volume mount only points to volumes, which we don't implement at present. Should we be restricting its use for now?

@haircommander
Collaborator

@mheon to answer your thought, in kubernetes, volumes are a generic structure of "here's some data to mount into the container". volumeMount is the interface for a user to specify a volume (specified by name) should be mounted into a container. I.e. they reference the same thing, but from a different perspective. The name field needs to match from volumeMount to Volume, which tells the manager which volume points to which container path.

I don't see any reason why we couldn't mount a hostPath to multiple spots in the container, but I'm not convinced k8s does this. subPath seems to provide uniqueness, and seems like a different use case than mounting one hostPath to multiple container points.
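The case being weighed above (one hostPath volume referenced by two volumeMounts entries, with no subPath to distinguish them) would look roughly like this; all names and paths here are hypothetical, and whether Kubernetes itself accepts this form is exactly the open question in the comment.

```yaml
# Hypothetical: a single hostPath volume mounted at two container paths
# with a duplicated volumeMount name and no subPath.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: shared
      mountPath: /data/a
    - name: shared          # same name, second mount point
      mountPath: /data/b
  volumes:
  - name: shared
    hostPath:
      path: /srv/shared
```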

@jayaddison
Author

While I'd like to revisit the kube functionality in podman in future (both generation of and play from kube YAML), I've opted to skip this for now and use the 'native' podman management commands to perform volume management and sharing instead. From my point of view this issue is fine to resolve as wontfix; apologies for any distraction caused.

@rhatdan rhatdan closed this as completed Mar 31, 2020
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023