Add podman rm --depend #12694
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: rhatdan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@mheon @vrothberg @giuseppe PTAL
The performance looks good to me. We're first collecting all containers and can then partition them for parallel removal.
However, if we had to preserve the relative order and remove containers starting at the leaves, it won't work as is.
I'll let @mheon weigh in.
libpod/runtime_ctr.go
Outdated
@@ -915,6 +915,44 @@ func (r *Runtime) evictContainer(ctx context.Context, idOrName string, removeVol
	return id, cleanupErr
}

// DependMap populates of map of all containers depending on the specified container
Suggested change:
-// DependMap populates of map of all containers depending on the specified container
+// DependMap populates a map of all containers depending on the specified container
Comment seems unrelated to the actual function?
Fixed
libpod/runtime_ctr.go
Outdated
// DependMap populates of map of all containers depending on the specified container
func (r *Runtime) DependMap(ctx context.Context, ctr *Container, ctrMap map[string]*Container) error {
	r.lock.RLock()
	defer r.lock.RUnlock()
Do we really need to lock? I think we can read the data in any case.
Rewritten, and yes we will need the lock.
This is just the runtime lock; we need to take the container lock (c.lock.Lock()) instead.
Also, the lock needs to be in removeDepend(); otherwise we don't take the lock when we recurse onto other containers.
We can probably avoid removeDepend entirely given this; the runtime lock is superfluous.
Please check the code again.
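As an aside for readers following the locking debate, here is a toy, self-contained sketch of the recursion under discussion. The types and names are stand-ins, not libpod's, and the per-container lock the reviewers ask for is only marked in a comment:

```go
package main

import "fmt"

// ctr is a stand-in for libpod's Container: an ID plus the IDs of the
// containers it depends on.
type ctr struct {
	id   string
	deps []string
}

// dependMap collects the ID of every container in all that transitively
// depends on target. In the real code, each container's own lock
// (c.lock, per the review) would be taken while its state is inspected;
// the runtime lock is not needed for this read.
func dependMap(target *ctr, all []*ctr, out map[string]string) {
	for _, c := range all {
		if _, seen := out[c.id]; seen {
			continue
		}
		for _, dep := range c.deps {
			if dep == target.id {
				out[c.id] = c.id
				dependMap(c, all, out) // recurse: dependents of dependents
			}
		}
	}
}

func main() {
	a := &ctr{id: "a"}
	b := &ctr{id: "b", deps: []string{"a"}}
	c := &ctr{id: "c", deps: []string{"b"}}
	out := map[string]string{}
	dependMap(a, []*ctr{a, b, c}, out)
	fmt.Println(out) // map[b:b c:c]
}
```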
libpod/runtime_ctr.go
Outdated
	}
	for _, cid := range ctr.Dependencies() {
		if cid == searchCtr.ID() {
			ctrMap[ctr.ID()] = ctr
Can we turn the map into map[string]string? Currently there is a bug since we're using the loop variable: ctr will be added to the map but will change with the next iteration of the loop. That means all entries of ctr will point to the last item in allCtrs. Using a map[string]string will avoid that problem AND it forces callers to look up the containers another time before removal - they may have already been removed in the meantime.
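For readers unfamiliar with the bug being described, a minimal standalone illustration. Note that Go 1.22 later made loop variables per-iteration, which fixes this class of bug; at the time of this PR it was very much live:

```go
package main

import "fmt"

type container struct{ id string }

func main() {
	all := []container{{"a"}, {"b"}, {"c"}}
	m := map[string]*container{}
	for _, c := range all {
		// Before Go 1.22 there is a single loop variable c, so every
		// map entry stores the same address; after the loop, all three
		// entries point at the last element.
		m[c.id] = &c
	}
	for id, ptr := range m {
		fmt.Println(id, "->", ptr.id) // on Go <1.22: a -> c, b -> c, c -> c
	}
}
```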
Rewritten.
@@ -138,6 +138,7 @@ type PruneOptions struct {
//go:generate go run ../generator/generator.go RemoveOptions
// RemoveOptions are optional options for removing containers
type RemoveOptions struct {
	Depend *bool
Also needs changes for the handlers and swagger.
Fixed
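As a rough sketch of what the remote-client side might look like once the handlers and generated bindings are wired up: the WithDepend setter name follows the pattern the generator uses for the other fields, and the multi-report return is an assumption based on this PR's behavior, so treat both as illustrative rather than the merged API:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/podman/v4/pkg/bindings"
	"github.com/containers/podman/v4/pkg/bindings/containers"
)

func main() {
	// Connect to a running podman service (socket path is an example).
	conn, err := bindings.NewConnection(context.Background(),
		"unix:///run/podman/podman.sock")
	if err != nil {
		panic(err)
	}
	// Assumed generated setter: the Depend *bool field -> WithDepend(bool).
	opts := new(containers.RemoveOptions).WithForce(true).WithDepend(true)
	// Assumed to report every container removed, not just the named one.
	reports, err := containers.Remove(conn, "mywebserver", opts)
	if err != nil {
		panic(err)
	}
	fmt.Println(reports)
}
```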
@@ -56,6 +60,11 @@ Remove a container by its name *mywebserver*
$ podman rm mywebserver
```

Remove a *mywebserver* container and all of the containers that depend on it
```
$ podman rm --depend mywebserver
Does this print every container that was removed? I got a little concerned because there are no changes to cmd/podman
Yes, the container is added to the reports list when being removed in the API. The Podman client just walks the list.
Added a test to verify.
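To make "the Podman client just walks the list" concrete, a toy sketch; the RmReport field names here are assumptions modeled on the discussion, not copied from the code:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// RmReport mirrors the shape of the per-container removal report the
// API returns (Id of the container, Err if removal failed); the field
// names are an assumption.
type RmReport struct {
	Id  string
	Err error
}

func main() {
	reports := []RmReport{
		{Id: "35965a814e0b"},
		{Id: "f72b7a1abf8f", Err: errors.New("container is in use")},
	}
	// Walk the list: print each removed ID, report failures on stderr,
	// which is roughly what podman rm does with the reports.
	for _, r := range reports {
		if r.Err != nil {
			fmt.Fprintln(os.Stderr, "Error:", r.Err)
			continue
		}
		fmt.Println(r.Id)
	}
}
```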
@@ -36,6 +36,7 @@ func RemoveContainer(w http.ResponseWriter, r *http.Request) {
	query := struct {
		Force  bool `schema:"force"`
		Ignore bool `schema:"ignore"`
		Depend bool `schema:"depend"`
I don't think we want this in the compat API; the return type won't make it clear that multiple containers were removed. Having it only in the fancy version that can already do multiple containers seems better?
It is only handled in libpod, not in compat.
libpod/runtime_ctr.go
Outdated
@@ -915,6 +915,44 @@ func (r *Runtime) evictContainer(ctx context.Context, idOrName string, removeVol
	return id, cleanupErr
}

// DependMap populates of map of all containers depending on the specified container
func (r *Runtime) DependMap(ctx context.Context, ctr *Container, ctrMap map[string]*Container) error {
Maybe change the signature to not accept a map[string]*Container but to allocate it here and return it instead? I don't think many people will call DependMap on partially-generated maps?
I am passing in the map with the list of all the containers currently added in the podman run command. This prevents us from having to look up containers that are currently being deleted.
libpod/runtime_ctr.go
Outdated
	if ctrMap[ctr.ID()] != nil {
		continue
	}
	for _, cid := range ctr.Dependencies() {
It should be safe to recurse onto the dependencies of the container; the DB has a built-in guarantee of no cycles.
@@ -357,6 +357,26 @@ func (ic *ContainerEngine) ContainerRm(ctx context.Context, namesOrIds []string,
	return reports, nil
}

	if !options.All && options.Depend {
Ordering seems a potential concern. Dependencies can have dependencies, and the order in which we remove them has to respect that; it's more graph traversal than a for loop. We may be able to reuse some of the pod code? Pod start builds a graph and traverses it to start containers (unfortunately, in the wrong direction).
I rewrote this to handle everything in libpod directly.
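A sketch of the leaf-first traversal mheon is describing, under the assumption (stated elsewhere in the thread) that the dependency graph is acyclic. The names are illustrative, not libpod's:

```go
package main

import "fmt"

// removalOrder emits container IDs so that every container appears
// before anything it depends on (leaves first). dependents[x] lists the
// containers that depend on x. Plain DFS post-order; safe to recurse
// because the graph has no cycles.
func removalOrder(id string, dependents map[string][]string, seen map[string]bool, out *[]string) {
	if seen[id] {
		return
	}
	seen[id] = true
	for _, dep := range dependents[id] {
		removalOrder(dep, dependents, seen, out) // remove dependents first
	}
	*out = append(*out, id)
}

func main() {
	// c depends on b, b depends on a.
	dependents := map[string][]string{"a": {"b"}, "b": {"c"}}
	var order []string
	removalOrder("a", dependents, map[string]bool{}, &order)
	fmt.Println(order) // [c b a]
}
```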
libpod/runtime_ctr.go
Outdated
	if ctrMap[ctr.ID()] != nil {
		continue
	}
	for _, cid := range ctr.Dependencies() {
If a container depends on infra within a pod, will this remove infra but keep the pod in existence? Could bring up some weird issues.
I think it'll just hard-fail because infra containers can't be removed except by removing their pod - so podman rm -f --depend infra_ctr should fail without doing anything.
./bin/podman rm --depend 35965a814e0b
Error: container 35965a814e0ba301bc5d19bd1acaa1809af443a09c4d3f9ebacc02a454bc72f3 is the infra container of pod f72b7a1abf8f76a69e4a859542103db4184bcd7d07b7afaf73608eb6db870a0e and cannot be removed without removing the pod
Force-pushed from b0ddf65 to 72e3387.
Force-pushed from 19dc7bd to cb22727.
@vrothberg @mheon @cdoern PTAL
cmd/podman/root.go
Outdated
@@ -137,7 +137,7 @@ func persistentPreRunE(cmd *cobra.Command, args []string) error {
	runtimeFlag := cmd.Root().Flags().Lookup("runtime")
	if runtimeFlag == nil {
		return errors.Errorf(
-			"Unexcpected error setting runtime to '%s' for restore",
+			"Unexpected error setting runtime to '%s' for restore",
can you un-capitalize this? Go errors should be lowercased
@@ -57,6 +58,7 @@ func RemoveContainer(w http.ResponseWriter, r *http.Request) {
	if utils.IsLibpodRequest(r) {
		options.Volumes = query.LibpodVolumes
		options.Timeout = query.Timeout
		options.Depend = query.Depend
Does this handle returns properly? As in, if we remove >1 container, we tell the server that?
Yes it should return all of the containers.
Added a test to verify.
Force-pushed from d6854ca to bbca288.
Changes LGTM, but you're still fighting with the CI @rhatdan
Current failures are:
I've restarted twice already, no change. This is either some sort of dnf mirror flake that will resolve itself, or some sort of bug in our VM setup. For now I will assume the former.
APIv2 tests are barfing because a DELETE operation, which should return 204, is now returning 200.
Force-pushed from da1f83f to 6c0bd6f.
LGTM
and dare I say?
Happy Green Test Buttons!
@mheon PTAL. This turned out to be a lot more work than originally thought.
LGTM
test/apiv2/20-containers.at
Outdated
@@ -85,7 +85,7 @@ else
fi
fi

-t DELETE libpod/containers/$cid 204
+t DELETE libpod/containers/$cid 200
All these 204->200 changes trouble me. @jwhonce could you please confirm whether or not these are OK?
This is the breaking API. 200 means no data is returned; 204 means data is returned, which is new, as I understand it.
Other way around: 200 returns data, 204 does not.
One other thing, this change should probably be reflected in the swagger docs. I don't see that as part of this PR, but it's been too long since I looked at swagger so I may just be missing something obvious.
Oh and one last thing: since 200 returns data, if this is really truly a desired change, could you add some data checks to these tests? Something like
-t DELETE libpod/containers/$cid 200
+t DELETE libpod/containers/$cid 200 length=1 .[0].Id=$cid
OK, I updated Swagger. Yes, I screwed up. Before, if you removed a container you got back no data; now you get back a list of containers that were removed or failed to be removed.
Basically if you send in the depend flag, the rm command can remove one or more containers.
Eventually we should move options.All to the server side as well.
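A small net/http sketch of the status-code semantics settled on above: 204 when there is no body to send, 200 plus a JSON body listing what was removed. Illustrative only, not podman's handler:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// deleteContainer demonstrates the 200-vs-204 distinction; removed
// would be filled in by real removal logic, hard-coded here.
func deleteContainer(w http.ResponseWriter, r *http.Request) {
	removed := []map[string]string{{"Id": "35965a814e0b"}}
	if len(removed) == 0 {
		// 204 No Content: success, nothing in the body.
		w.WriteHeader(http.StatusNoContent)
		return
	}
	// 200 OK: the body reports every container that was removed.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(removed)
}

func main() {
	http.HandleFunc("/libpod/containers/", deleteContainer)
	http.ListenAndServe(":8080", nil)
}
```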
Force-pushed from 2653799 to cbdf7d6.
This option causes Podman to not only remove the specified containers but all of the containers that depend on the specified containers.

Fixes: containers#10360

Also ran codespell on the code.

Signed-off-by: Daniel J Walsh <[email protected]>
@edsantiago PTAL
LGTM, feel free to merge manually.