
Add --time option for podman * rm -f flag #11763

Merged — 1 commit merged into containers:main on Oct 4, 2021

Conversation

rhatdan
Member

@rhatdan rhatdan commented Sep 28, 2021

Add --time options to

podman container rm
podman pod rm
podman volume rm
podman network rm

Signed-off-by: Daniel J Walsh [email protected]

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 28, 2021
@rhatdan
Member Author

rhatdan commented Sep 28, 2021

Might make @edsantiago happy in #11762

@rhatdan rhatdan force-pushed the timeout branch 2 times, most recently from 571823e to ba902af Compare September 28, 2021 01:37
Member

@vrothberg vrothberg left a comment


Other than the minor nits, LGTM

cmd/podman/containers/rm.go (outdated)
@@ -159,7 +159,7 @@ var _ = Describe("Podman checkpoint", func() {
 	Expect(result).Should(Exit(2))
 	Expect(podmanTest.NumberOfContainersRunning()).To(Equal(1))

-	result = podmanTest.Podman([]string{"rm", "-f", cid})
+	result = podmanTest.Podman([]string{"rm", "-t", 0, "-f", cid})
Member


0 -> "0" in all _test.go files.

Member Author


This is why I shouldn't code late at night.

if options.Force && options.Timeout != nil {
	// Ignore errors, ic.removeContainer will catch them
	parallelctr.ContainerOp(ctx, ctrs, func(c *libpod.Container) error {
		return c.StopWithTimeout(*options.Timeout)
	})
}
Member


Can this be moved inside the other ContainerOp call below? I don't think it is necessary to stop all containers before we start to remove them.
Also, should we really ignore all errors?

Member Author


Moved
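
For readers following along, the merged shape would be roughly the following — a minimal sketch only; ic.removeContainer is the internal helper mentioned in the comment above, and its exact signature is assumed here:

// Sketch: stop (when force + timeout are set) and remove in a single
// parallel pass, instead of stopping every container up front.
errMap, err := parallelctr.ContainerOp(ctx, ctrs, func(c *libpod.Container) error {
	if options.Force && options.Timeout != nil {
		// Stop errors are ignored; the removal below surfaces real failures.
		_ = c.StopWithTimeout(*options.Timeout)
	}
	return ic.removeContainer(ctx, c, options) // assumed signature
})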

@rhatdan
Member Author

rhatdan commented Sep 28, 2021

@Luap99 good idea.
Updated and hopefully fixed tests.

@edsantiago edsantiago changed the title from "Add --time out for podman rm -f flag" to "Add --time option for podman rm -f flag" on Sep 28, 2021
@vrothberg
Member

@rhatdan needs a rebase

@edsantiago
Member

Yeah, my fault with the bats-cleanup PR. Sorry! And, thank you!

@@ -193,6 +193,10 @@ func (ic *ContainerEngine) ContainerRm(ctx context.Context, namesOrIds []string,
 	}
 	reports := make([]*entities.RmReport, 0, len(ctrs))
 	for _, c := range ctrs {
+		if opts.Force && opts.Timeout != nil {
+			// Ignore errors, containers.Remove will catch them
+			containers.Stop(ic.ClientCtx, c.ID, &containers.StopOptions{Timeout: opts.Timeout})
+		}
Member


Why are we doing this? Remove already stops containers if force is specified?

Member


This just strikes me as potentially buggy - we ignore all errors that could occur with stopping the container... Seems like it would be a lot safer to add a Timeout arg to the libpod RemoveContainer function?

Member Author


There is some nasty plumbing to do. We will still try to stop the container within the remove stage if it is not stopped, so I don't believe there is any risk.

Member Author


This allows us to stop them first with the requested timeout, rather than with a timeout of 0.

Member


This can be removed now I think

@rhatdan rhatdan force-pushed the timeout branch 2 times, most recently from bbaa835 to 3c7017e Compare September 28, 2021 13:57
@edsantiago
Member

This CI error actually looks real:

  podman checkpoint container with export (migration)
  ...
 Running: podman [options] ps --all --format={{.Status}}
 Up 2 seconds ago
 Up 1 second ago
 Running: podman [options] rm -t 0 -fa
         7b425b2420aceccdd7fcafbe9a5ffcc3c48c37257f8df53cbe958e440a1ca40b
         Error: container e6f1e20fd52ba325a469ca432d36a25f65a016115a44d687c61cefbcd0961fb0 does not exist in database: no such container

@rhatdan rhatdan changed the title from "Add --time option for podman rm -f flag" to "Add --time option for podman * rm -f flag" on Sep 28, 2021
@rhatdan
Member Author

rhatdan commented Sep 28, 2021

@mheon I expanded the PR to handle pod, network, and volume, and fully plumbed this into libpod.
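
In entities terms, that plumbing presumably amounts to a Timeout field on each rm options struct — a hypothetical sketch; only the Force and Timeout fields are confirmed by the diffs in this PR:

// Hypothetical shape of a shared rm options struct after the plumbing.
// A nil Timeout means "fall back to the engine's configured stop timeout".
type RmOptions struct {
	Force   bool  // --force; required for Timeout to take effect
	Timeout *uint // --time, in seconds; set only when the flag is used
	// ... other existing fields unchanged
}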

@@ -43,6 +45,10 @@ func init() {
 	flags := rmCommand.Flags()
 	flags.BoolVarP(&rmOptions.All, "all", "a", false, "Remove all volumes")
 	flags.BoolVarP(&rmOptions.Force, "force", "f", false, "Remove a volume by force, even if it is being used by a container")
+	timeFlagName := "time"
+	flags.UintVarP(&stopTimeout, timeFlagName, "t", containerConfig.Engine.StopTimeout, "Seconds to wait for pod stop before killing the container")
Member


Is this message right? I'm not sure of it or my suggestion...

Suggested change:

-flags.UintVarP(&stopTimeout, timeFlagName, "t", containerConfig.Engine.StopTimeout, "Seconds to wait for pod stop before killing the container")
+flags.UintVarP(&stopTimeout, timeFlagName, "t", containerConfig.Engine.StopTimeout, "Seconds to wait for the container to stop before removing the volume")
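
One wrinkle the suggestion doesn't touch: stopTimeout is a plain uint here, while the backend takes *uint. A common pattern (sketched below; names assumed) is to set the pointer only when the flag was actually used:

// Sketch: convert the flag value to *uint only when --time was given,
// and require --force so the timeout actually has an effect.
if cmd.Flags().Changed(timeFlagName) {
	if !rmOptions.Force {
		return errors.New("--force option must be specified to use the --time option")
	}
	rmOptions.Timeout = &stopTimeout
}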

@@ -15,6 +15,10 @@ Delete one or more Podman networks.
The `force` option will remove all containers that use the named network. If the container is
running, the container will be stopped and removed.

#### **--time**, **-t**=*time*

Time to wait before forcibly stopping the container. The --force option must be specified to use the --time option.
Member


container or network?

Member


or maybe "the container that the network is attached to."

@@ -33,6 +33,10 @@ Stop running containers and delete all stopped containers before removal of pod.

Read pod ID from the specified file and remove the pod. Can be specified multiple times.

#### **--time**, **-t**=*time*

Time to wait before forcibly stopping the container. The --force option must be specified to use the --time option.
Member


pod or container?

Member


or "containers within the pod".

@@ -28,6 +28,9 @@ If it is being used by containers, the containers will be removed first.

Print usage statement

#### **--time**, **-t**=*time*

Time to wait before forcibly stopping the container. The --force option must be specified to use the --time option.
Member


ditto here if you take similar comments.

@@ -114,4 +114,4 @@ podman pod kill foobar
 ########
 # Remove all pods and their containers
 ########
-podman pod rm -fa
+podman pod rm -t 0 -fa
Member


Maybe I missed it, but I don't recall seeing a -t value for anything other than zero. It would be nice to have at least a small handful with values other than zero.

Member Author


Added some.
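
For example, an e2e check with a nonzero timeout could look like this — a sketch in the style of the test diffs above; the 5-second value and the session variable name are illustrative:

// Sketch: give the container up to 5 seconds to stop before it is killed.
session := podmanTest.Podman([]string{"rm", "-t", "5", "-f", cid})
session.WaitWithDefaultTimeout()
Expect(session).Should(Exit(0))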

@rhatdan rhatdan force-pushed the timeout branch 6 times, most recently from b2e7372 to b7df5f9 Compare September 30, 2021 18:47
@rhatdan rhatdan force-pushed the timeout branch 3 times, most recently from 9571746 to 6fe5496 Compare October 1, 2021 18:48
Member

@Luap99 Luap99 left a comment


There is still remote wiring to do: pkg/domain/infra/tunnel does not set the timeout option for the network/volume rm bindings.
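
The missing piece would presumably mirror the containers.Stop pattern shown earlier; a hypothetical sketch for the network side (the bindings option fields and the Remove signature are assumed, not taken from this PR):

// Hypothetical tunnel wiring: forward the timeout to the network rm
// binding, as containers.Stop does with StopOptions{Timeout: ...}.
opts := &network.RemoveOptions{Force: &options.Force}
if options.Timeout != nil {
	opts.Timeout = options.Timeout // seconds to wait before killing
}
reports, err := network.Remove(ic.ClientCtx, name, opts)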

@@ -15,6 +15,10 @@ Delete one or more Podman networks.
The `force` option will remove all containers that use the named network. If the container is
running, the container will be stopped and removed.

#### **--time**, **-t**=*time*
Member


Suggested change:

-#### **--time**, **-t**=*time*
+#### **--time**, **-t**=*seconds*

Please change this to seconds everywhere. A user has no idea what "time" means.

Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm

Signed-off-by: Daniel J Walsh <[email protected]>
Member

@Luap99 Luap99 left a comment


LGTM

@openshift-ci
Contributor

openshift-ci bot commented Oct 4, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Luap99, rhatdan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Luap99
Member

Luap99 commented Oct 4, 2021

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Oct 4, 2021
@openshift-merge-robot openshift-merge-robot merged commit a866a2f into containers:main Oct 4, 2021
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023