
Fix system prune cmd user message with options #9775

Merged

Conversation

@jmguzik (Contributor) commented Mar 21, 2021

[NO TESTS NEEDED]

Signed-off-by: Jakub Guzik [email protected]

/kind bug

  1. Podman before the fix (the same message regardless of options):
$ podman system prune 

WARNING! This will remove:
        - all stopped containers
        - all stopped pods
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] ^C
$ podman system prune -a

WARNING! This will remove:
        - all stopped containers
        - all stopped pods
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] ^C
$ podman system prune -a --volumes 

WARNING! This will remove:
        - all stopped containers
        - all volumes not used by at least one container
        - all stopped pods
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] ^C
$ podman system prune --volumes 

WARNING! This will remove:
        - all stopped containers
        - all volumes not used by at least one container
        - all stopped pods
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] 

  2. Docker:
$ docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N] ^C
$ docker system prune -a
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Are you sure you want to continue? [y/N] ^C
$ docker system prune -a --volumes 
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all volumes not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Are you sure you want to continue? [y/N] ^C
$ docker system prune --volumes 
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all volumes not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N] ^C
  3. Podman after the fix:
$ bin/podman system prune 
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all dangling build cache

Are you sure you want to continue? [y/N] ^C
$ bin/podman system prune -a
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all images without at least one container associated to them
        - all build cache

Are you sure you want to continue? [y/N] ^C
$ bin/podman system prune -a --volumes 
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all volumes not used by at least one container
        - all images without at least one container associated to them
        - all build cache

Are you sure you want to continue? [y/N] ^C
$ bin/podman system prune  --volumes   
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all volumes not used by at least one container
        - all dangling images
        - all dangling build cache

Are you sure you want to continue? [y/N] ^C

I kept the original tabs in Podman, but the message content is now the same as in Docker.
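
The diff itself is not reproduced in this conversation, but the shape of the fix can be sketched: instead of printing one fixed warning, the message is assembled from the values of the -a/--all and --volumes flags so it matches what prune will actually remove. Below is a minimal Go sketch of that idea; the function name buildPruneWarning and its structure are assumptions for illustration, not the actual cmd/podman/system/prune.go code.

package main

import (
	"fmt"
	"strings"
)

// buildPruneWarning assembles a flag-dependent warning for `system prune`.
// Hypothetical sketch only; not the real Podman implementation.
func buildPruneWarning(all, volumes bool) string {
	var b strings.Builder
	b.WriteString("WARNING! This will remove:\n")
	b.WriteString("\t- all stopped containers\n")
	b.WriteString("\t- all networks not used by at least one container\n")
	if volumes {
		b.WriteString("\t- all volumes not used by at least one container\n")
	}
	if all {
		b.WriteString("\t- all images without at least one container associated to them\n")
		b.WriteString("\t- all build cache\n")
	} else {
		b.WriteString("\t- all dangling images\n")
		b.WriteString("\t- all dangling build cache\n")
	}
	return b.String()
}

func main() {
	// Reproduces the "-a --volumes" variant from the output above.
	fmt.Print(buildPruneWarning(true, true))
	fmt.Print("\nAre you sure you want to continue? [y/N] ")
}

Deriving the prompt from the same flag values the prune logic consumes keeps the confirmation honest about what will actually be removed, which is exactly the mismatch this PR fixes.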

@jmguzik force-pushed the system-prune-msg-fix branch 2 times, most recently from 82e5761 to eaaf245 on March 21, 2021 at 08:22
Review comment on cmd/podman/system/prune.go (outdated, resolved)
@jmguzik force-pushed the system-prune-msg-fix branch from eaaf245 to 1dfbdd5 on March 21, 2021 at 17:03
@Luap99 (Member) left a comment:

LGTM

@openshift-ci-robot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jmguzik, Luap99

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label on Mar 21, 2021
@jwhonce (Member) commented Mar 22, 2021

/lgtm

@openshift-ci-robot added the lgtm label on Mar 22, 2021
@openshift-merge-robot merged commit 2cd37ed into containers:master on Mar 22, 2021
@github-actions added the locked - please file new issue/PR label, locked the conversation as resolved, and limited it to collaborators on Sep 23, 2023
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.