[Bug]: Auto-update with pods may fail if there's only one container in the pod #17181
Comments
@vrothberg PTAL
Thanks for reaching out, @saiarcot895. Could you share a reproducer?
Sure, here are the systemd files for the pod and container:
This assumes that a subuid and subgid entry exist for the user. Once this is set up, the failure occurs when there is an update. I do have two additional lines in both of the above files that I didn't include above:
These are to reload my nftables rules. I'm assuming that they don't have an impact on whether the bug happens.
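(The actual lines weren't included in the report; purely as a hypothetical illustration, a post-start reload of an nftables ruleset in a systemd unit could look like the following, with the ruleset path being an assumption.)

```
[Service]
# reload nftables rules once the pod/container is up (hypothetical example)
ExecStartPost=/usr/sbin/nft -f /etc/nftables.conf
```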
@vrothberg Were you able to repro this issue?
Using podman v4.4.0 (rootless), I am experiencing the same problem. After I build a new image locally, running …
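For reference, a minimal sketch of the local-image auto-update flow (image and container names here are placeholders; the container is assumed to carry the `local` auto-update label and to be managed by a unit generated with `podman generate systemd --new`):

```
# build a new version of the image locally (name is a placeholder)
podman build -t localhost/myapp:latest .

# the container opts in to local updates via this label
podman create --name myapp --label io.containers.autoupdate=local localhost/myapp:latest

# preview which units would be updated, then apply the update
podman auto-update --dry-run
podman auto-update
```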
@saiarcot895, I have not found time to look into the issue yet. I'll update this issue once I do.
I have the same issue with the linuxserver/bookstack pod. It never auto-updated itself; I have to manually pull the latest image and restart the pod in systemd.
I think it also fails to update and rolls back from time to time in cases with more than one container in the pod.
Thanks everybody for the input and the patience. I opened #17508 to fix the issue.
Note that I had to revisit the initial version of the PR. Auto-updating a container inside a pod will always cause the entire pod to be restarted, similar to what happens with the `podman-kube@` template.
Support auto updating containers running inside pods. Similar to containers, the systemd units need to be generated via `podman-generate-systemd --new $POD` to generate the pod's units.

Note that auto updating a container inside a pod will restart the entire pod. Updates of multiple containers inside a pod are batched, such that a pod is restarted at most once. That is effectively the same mechanism used for auto updating containers in a K8s YAML via the `podman-kube@` template or via Quadlet.

Updating a single container unit without restarting the entire pod is not possible. The reasoning behind this is that pods are created with `--exit-policy=stop`, which would cause the pod to be stopped when auto updating the only container inside it. The (reverse) dependencies between the pod and its container units have been carefully selected for robustness; changes may entail undesired side effects or backward incompatibilities that I am not comfortable with.

Fixes: containers#17181

Signed-off-by: Valentin Rothberg <[email protected]>
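A rough sketch of the workflow described above, assuming a pod named `mypod` whose container already carries the `io.containers.autoupdate=registry` label (pod, unit, and file names are placeholders):

```
# generate systemd units for the pod and its containers
podman generate systemd --new --files --name mypod

# install and start the generated units (rootless example)
mv pod-mypod.service container-*.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-mypod.service

# updates to containers in the pod are batched; the pod is restarted at most once
podman auto-update
```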
Issue Description
My setup consists of multiple pods, each running one or more containers. All of the pods have user namespacing (or at least a UID/GID map) and a custom network specified, and all containers have auto-update enabled. During an auto-update, if a container needs to be updated and it is the only container running in its pod, the auto-update may fail: when systemd restarts the container, it appears to bring down the pod at the same time, so the container gets rolled back.
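A minimal sketch of the kind of setup described, with placeholder names, mapping, and image (not the reporter's actual configuration):

```
# custom network plus a user-namespaced pod (values are placeholders)
podman network create mynet
podman pod create --name mypod --network mynet --userns auto

# a single container in the pod, opted in to registry auto-updates
podman create --pod mypod --name myctr \
    --label io.containers.autoupdate=registry \
    registry.example.com/myapp:latest
```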
Steps to reproduce the issue
Describe the results you received
The container was rolled back instead of getting updated.
Describe the results you expected
The container should have been updated successfully.
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
Ubuntu 22.04 with systemd 249
Additional information
Logs around the time of the update (logs from other containers have been excluded):