play kube: cannot restart pod with named volumes #16348
Comments
I am also looking at a similar issue. The root cause of this failure is that I implemented the code to remove the volume as part of the teardown (down) call. Having said that, I'm not sure which behavior you are looking for: deleting the volumes or reusing them.
The idea was that podman should not delete important information by default. In this use case, it does make sense to nuke the volume, I guess. But in the case where the user expects the volume data to be preserved, it does not. Should we just not error if the volume already exists by name? That would leave the responsibility for what to do with the data on the volume to the user. We could also add a flag for optionally tearing down volumes.
That sounds good. @alexlarsson added the --ignore flag, so the plumbing should be easy. BUT: How does Kubernetes behave?
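Assuming the flag in question is the --ignore option on podman volume create, a minimal sketch of the behavior it provides (volume name is illustrative):

# First invocation creates the named volume.
podman volume create megamek-data

# With --ignore, a second invocation succeeds and simply returns the
# existing volume instead of failing with "volume already exists".
podman volume create --ignore megamek-data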
Tackling this from a different direction: why does play kube error out when a named volume it needs already exists, instead of reusing it?
We've just been talking through that on gchat. The current proposal (please correct me if this is inaccurate, @baude) is to create volumes if they don't exist, use existing volumes at start time if they do, and add a --force flag to the "down" command to delete volumes when the user wants them removed.
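A sketch of what the reproduction sequence from the report would look like under that proposal (the --force teardown flag is the proposed addition, not an option that existed at the time):

podman play kube deploy-megamek.yaml                    # creates the pod and its named volumes
podman play kube deploy-megamek.yaml --down             # removes the pod, keeps the volumes
podman play kube deploy-megamek.yaml                    # reuses the existing volumes and restarts
podman play kube deploy-megamek.yaml --down --force     # proposed: also remove the named volumes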
Presumably the systemd integration will not add the --force flag by default? That means the default systemd case (systemctl stop/restart of the service) will work, as will a reboot. An edge case is the node losing power, but in that situation I think it is reasonable to at least try to start the service with the existing volumes; if that fails, the failure can be detected and the volumes cleared with --force or some other out-of-band teardown mechanism. In the normal "no existing volumes" case, kube play would still proceed by creating them.
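For illustration, a hypothetical systemd service wrapping this flow (unit name and paths are made up; the key point is that the stop path does not pass the proposed --force, so named volumes survive a restart):

# megamek-kube.service (hypothetical)
[Unit]
Description=MegaMek pod via podman play kube
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Start: create the named volumes if missing, reuse them if they already exist.
ExecStart=/usr/bin/podman play kube /etc/megamek/deploy-megamek.yaml
# Stop: tear down the pod but keep the named volumes (no --force).
ExecStop=/usr/bin/podman play kube --down /etc/megamek/deploy-megamek.yaml

[Install]
WantedBy=multi-user.target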
ACK. Unless we find a compelling reason otherwise, not removing volumes seems like a sane default. We plan on adding Kube support to Quadlet, where users should be able to specify whether they want volumes to be removed.
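As a rough sketch of the Quadlet direction mentioned here (the .kube unit type and its options were still being designed at the time; the file name, path, and keys below are illustrative):

# /etc/containers/systemd/megamek.kube (hypothetical Quadlet unit)
[Unit]
Description=MegaMek pod from a Kubernetes YAML

[Kube]
# Points Quadlet at the same YAML used with podman play kube.
Yaml=/etc/megamek/deploy-megamek.yaml

[Install]
WantedBy=default.target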
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I am deploying the following YAML file via play kube:
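The reporter's YAML is not included in this excerpt; for illustration, a minimal hypothetical pod spec with a named volume following the same pattern (names and image are made up) would be:

apiVersion: v1
kind: Pod
metadata:
  name: megamek
spec:
  containers:
  - name: megamek
    image: docker.io/library/alpine:latest
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: megamek-data
      mountPath: /data
  volumes:
  # play kube maps this claim to a Podman named volume called "megamek-data".
  - name: megamek-data
    persistentVolumeClaim:
      claimName: megamek-data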
Steps to reproduce the issue:
1. podman play kube deploy-megamek.yaml
2. podman play kube deploy-megamek.yaml --down
3. podman play kube deploy-megamek.yaml
Describe the results you received:
In step 3, play kube fails with an error reporting that the named volume created in step 1 already exists.
Describe the results you expected:
I expected the pod to restart. It seems important that, for example, when restarting a node that runs services via the systemd integration, pods with volumes are able to shut down and restart gracefully.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info:

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
This is running in a KVM VM on a Fedora 37 hypervisor