
[Bug]: podman volume prune with != as filter does not work #17051

Closed
jschintag opened this issue Jan 10, 2023 · 7 comments · Fixed by #17483
Labels: kind/bug, locked - please file new issue/PR

@jschintag

Issue Description

Using podman volume prune with a filter excluding specific volumes (e.g. label!= or name!=) fails.

Steps to reproduce the issue

  1. podman volume create --label testlabel testvolume
  2. podman volume prune --filter="label!=testlabel"

Describe the results you received

[jschinta@m8345043 ~]$ podman volume create --label testlabel testvolume
testvolume

[jschinta@m8345043 ~]$ podman volume prune --filter="label!=testlabel"
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
Error: "label!" is an invalid volume filter

[jschinta@m8345043 ~]$ podman volume prune --filter="name!=testvolume"
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
Error: "name!" is an invalid volume filter

Describe the results you expected

All volumes except testvolume would be pruned (none in this case)

podman info output

$ podman info
host:
  arch: s390x
  buildahVersion: 1.28.0
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc37.s390x
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 99.42
    systemPercent: 0.35
    userPercent: 0.23
  cpus: 4
  distribution:
    distribution: fedora
    version: "37"
  eventLogger: journald
  hostname: m8345043
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.2.0-20230109.rc3.git7.cda2413e0b30.300.fc37.s390x
  linkmode: dynamic
  logDriver: journald
  memFree: 15179411456
  memTotal: 15684050944
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.7.2-3.fc37.s390x
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.s390x
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 30744256512
  swapTotal: 30744256512
  uptime: 0h 4m 55.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/jschinta/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/jschinta/.local/share/containers/storage
  graphRootAllocated: 106820345856
  graphRootUsed: 47633633280
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  volumePath: /home/jschinta/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668178834
  BuiltTime: Fri Nov 11 16:00:34 2022
  GitCommit: ""
  GoVersion: go1.19.2
  Os: linux
  OsArch: linux/s390x
  Version: 4.3.1


### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

Yes

jschintag added the kind/bug label Jan 10, 2023
mheon (Member) commented Jan 10, 2023

We've been adding != support bit by bit, so these probably haven't been hit yet. It is probably worth just going through all filter code and ensuring all filters support !=.
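
The error output above ("label!" is an invalid volume filter) suggests the filter string is split on the first "=", leaving the bogus key "label!". A minimal Go sketch of the kind of change described here, checking for "!=" before "=" when parsing a filter term (illustrative only, not Podman's actual filter code; filterTerm, parseFilter, and matchesLabel are made-up names):

```go
package main

import (
	"fmt"
	"strings"
)

// filterTerm is one --filter argument, e.g. "label=testlabel" or "label!=testlabel".
type filterTerm struct {
	key     string
	value   string
	negated bool
}

// parseFilter checks for "!=" before "=", so "label!=testlabel" is parsed as a
// negated "label" filter instead of the invalid key "label!".
func parseFilter(raw string) (filterTerm, error) {
	if key, value, ok := strings.Cut(raw, "!="); ok {
		return filterTerm{key: key, value: value, negated: true}, nil
	}
	if key, value, ok := strings.Cut(raw, "="); ok {
		return filterTerm{key: key, value: value}, nil
	}
	return filterTerm{}, fmt.Errorf("%q is not of the form key=value or key!=value", raw)
}

// matchesLabel applies a "label" term to a volume's labels. The term value may
// be "name" or "name=value"; negation inverts the result.
func (f filterTerm) matchesLabel(labels map[string]string) bool {
	wantKey, wantValue, hasValue := strings.Cut(f.value, "=")
	got, present := labels[wantKey]
	match := present && (!hasValue || got == wantValue)
	if f.negated {
		return !match
	}
	return match
}

func main() {
	term, _ := parseFilter("label!=testlabel")
	// Labels of a volume created with: podman volume create --label testlabel testvolume
	labels := map[string]string{"testlabel": ""}
	// Prints false: the volume carries the label, so a label!=testlabel prune should keep it.
	fmt.Println(term.matchesLabel(labels))
}
```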

rhatdan (Member) commented Jan 10, 2023

@jschintag Interested in opening a PR to implement this?

jschintag (Author) commented

> @jschintag Interested in opening a PR to implement this?

Unfortunately, I haven't worked on podman before and I don't have time at the moment to get familiar with a new project.

jschintag added a commit to jschintag/fedora-coreos-pipeline that referenced this issue Jan 24, 2023
Add a script to initialize secex-data volume during installation.
This is achieved by having the tarball stored on a second disk.

Also run a podman container that mounts the volume to keep it from being
pruned. See: containers/podman#17051

Signed-off-by: Jan Schintag <[email protected]>
vyasgun (Member) commented Feb 2, 2023

I can pick it up

/assign

rhatdan (Member) commented Feb 3, 2023

Great, thanks.

dustymabe (Contributor) commented

Should this be closed now that #17483 merged?

flouthoc (Collaborator) commented

Closing this.

dustymabe pushed a commit to coreos/fedora-coreos-pipeline that referenced this issue Mar 1, 2023
Add a script to initialize secex-data volume during installation.
This is achieved by having the tarball stored on a second disk.

Also run a podman container that mounts the volume to keep it from being
pruned. See: containers/podman#17051

Signed-off-by: Jan Schintag <[email protected]>
github-actions bot added the locked - please file new issue/PR label Aug 31, 2023
github-actions bot locked as resolved and limited conversation to collaborators Aug 31, 2023