Allow force removing containers with unavailable OCI runtime #3502
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: marcov. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Hi @marcov. Thanks for your PR. I'm waiting for a containers or openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
When the OCI runtime associated with a container cannot be found in the configuration, assign the default runtime to the container and return the ErrRuntimeUnavailable error. Signed-off-by: Marco Vedovati <[email protected]>
When the OCI runtime associated with a container cannot be found in the configuration, removing that container is not possible. Allow force removing the container by detecting when this happens. Signed-off-by: Marco Vedovati <[email protected]>
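Taken together, the two commits make the container lookup degrade instead of hard-failing. Below is a minimal sketch, not code from this PR, of how a caller driving a forced removal might consume that behavior; the function name, the `force` wiring, and the exact libpod signatures are assumptions for illustration:

```go
package sketch

import (
	"context"

	"github.com/containers/libpod/libpod"
	"github.com/containers/libpod/libpod/define"
	"github.com/pkg/errors"
)

// removeEvenIfRuntimeMissing is a hypothetical caller: with this PR, looking
// up a container whose OCI runtime is missing returns ErrRuntimeUnavailable
// while still handing back a container wired to the default runtime, so a
// forced removal can proceed with cleanup.
func removeEvenIfRuntimeMissing(ctx context.Context, rt *libpod.Runtime, name string, force bool) error {
	ctr, err := rt.LookupContainer(name)
	if err != nil {
		// Only continue past the error when it is the new sentinel
		// error and the user explicitly asked for a forced removal.
		if errors.Cause(err) != define.ErrRuntimeUnavailable || !force {
			return err
		}
	}
	return rt.RemoveContainer(ctx, ctr, force, false)
}
```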
Force-pushed from f85e755 to 52990c9 (compare)
/ok-to-test
```diff
- return errors.Wrapf(define.ErrInternal, "container %s was created with OCI runtime %s, but that runtime is not available in the current configuration", ctr.ID(), ctr.config.OCIRuntime)
+ err = errors.Wrapf(define.ErrRuntimeUnavailable, "cannot find OCI runtime %q for container %s", ctr.config.OCIRuntime, ctr.ID())
+ // fall back to the default runtime to allow ctr clean up
+ ociRuntime = s.runtime.defaultOCIRuntime
```
This doesn't seem sane; we can't reasonably expect the default runtime to be able to do anything.
```go
	}
	ctr.ociRuntime = ociRuntime
}

ctr.runtime = s.runtime
ctr.valid = valid
```
We should never return a valid container if err != nil
I understand the intent here, but I don't know if trying to convince the existing remove API to work with containers which completely lack a valid runtime is sane. We may want a new API to forcibly evict a container from the DB and storage with minimal checks: retrieve it from the DB; if there's an error doing so, skip everything else and try removing from the DB. If we can successfully retrieve the container, remove from storage.
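For reference, a self-contained sketch of the eviction flow described above; the `state`/`storage` interfaces and the `evictContainer` name are invented stand-ins, not the API that eventually landed:

```go
package sketch

// Invented stand-ins for the DB and storage layers; the real libpod
// types are much richer.
type container struct{ id string }

type state interface {
	Container(id string) (*container, error)
	RemoveContainer(id string) error
}

type storage interface {
	DeleteContainer(id string) error
}

// evictContainer follows the proposed flow: retrieve the container from
// the DB; if that fails, skip all other checks and try to remove the DB
// record anyway; if it succeeds, remove from storage before dropping the
// DB record.
func evictContainer(db state, store storage, id string) error {
	ctr, err := db.Container(id)
	if err != nil {
		// Retrieval failed: minimal checks, just drop the DB record.
		return db.RemoveContainer(id)
	}
	// Retrieval succeeded: clean up storage first, then the DB record.
	if err := store.DeleteContainer(ctr.id); err != nil {
		return err
	}
	return db.RemoveContainer(ctr.id)
}
```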
Thanks @mheon, you're right. I felt my changes were kind of a hack rather than a proper solution. I'll evaluate what you proposed and see if I can come back with something better.
@mheon following your advice, a possible way to handle this is: […]
Any feedback?
I have some thoughts about the CLI here (I think allowing […]). My thought here would be that […]
I'm closing this and continuing in a new PR to keep things cleaner: #3549
When the OCI runtime associated with a container cannot be found in the configuration, removing that container is not possible. Allow force removing the container by detecting when this happens.
This typically happens when you create a container with a specific runtime configured in libpod.conf, and the runtime name is later changed in that conf, with the following outcome:
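The original output is not preserved here, but based on the error message in the diff above it would be along the lines of: container <ID> was created with OCI runtime <name>, but that runtime is not available in the current configuration. A hypothetical libpod.conf edit that reproduces the situation (the runtime names are invented for illustration):

```toml
# Before (container created): the container's DB record stores the
# runtime name "kata-runtime".
runtime = "kata-runtime"

[runtimes]
kata-runtime = ["/usr/bin/kata-runtime"]
```

```toml
# After the rename: "kata-runtime" no longer resolves, so removing the
# existing container fails until it can be force-removed.
runtime = "kata"

[runtimes]
kata = ["/usr/bin/kata-runtime"]
```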