Unhealthy pod (created via podman play kube) doesn't restart #14505
Restart policy doesn't include health checks - it only restarts on the container actually exiting. Adding integration between healthchecks and restart policy would be a new feature.
@mheon In pkg/specgen/generate/kube/kube.go, if the restart policy equals "onfailure" or "always", the failure command should be "kill 1" according to the code there:
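A rough paraphrase of that check (from memory, not a verbatim copy of kube.go; the helper name and surrounding wiring are invented for illustration):

```go
package main

import "fmt"

// failureCommand mirrors the idea discussed above: the liveness-probe command
// only gets "kill 1" appended when the restart policy string is exactly
// "always" or "onfailure"; anything else falls back to "exit 1".
func failureCommand(restartPolicy string) string {
	failureCmd := "exit 1"
	if restartPolicy == "always" || restartPolicy == "onfailure" {
		// The restart policy is expected to bring the container back,
		// so killing PID 1 is acceptable here.
		failureCmd = "kill 1"
	}
	return failureCmd
}

func main() {
	// "on-failure" (hyphenated, as play kube appears to set it) does not match
	// "onfailure", so the probe ends up with "exit 1" and the container is only
	// flagged unhealthy instead of being killed.
	fmt.Println(failureCommand("on-failure")) // exit 1
	fmt.Println(failureCommand("always"))     // kill 1
}
```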
In fact it is "exit 1" when RestartPolicy: OnFailure is set in the kube definition.
This is also not supported yet. I'm working on a related feature (startup probes) and I can look into this once I'm done.
@mheon Why is it not supported if I can see this code in the v4.1.0 tag?
might be "onfailure" should be "on-failure"? |
There is no handling for
@mheon
That is exactly what I need: append "kill 1" to cmd when restartPolicy is always or onfailure.
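A minimal sketch of that kind of change, assuming the kube restart policy reaches this code in its hyphenated form ("on-failure"); the function name is hypothetical and this is not the actual patch:

```go
package kube

import "strings"

// livenessFailureCmd is a hypothetical helper: it normalizes the restart policy
// ("OnFailure", "on-failure", "onfailure" all collapse to "onfailure") before
// deciding whether the probe's failure command should kill PID 1.
func livenessFailureCmd(restartPolicy string) string {
	switch strings.ReplaceAll(strings.ToLower(restartPolicy), "-", "") {
	case "always", "onfailure":
		return "kill 1" // the restart policy will bring the container back up
	default:
		return "exit 1" // only mark the container as unhealthy
	}
}
```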
Changes:
- use --timestamp option to produce 'created' stamps that can be reliably tested in the image-history test
- podman now supports manifest & multiarch run, so we no longer need buildah
- bump up base alpine & busybox images

This turned out to be WAY more complicated than it should've been, because:
- alpine 3.14 fixed 'date -Iseconds' to include a colon in the TZ offset ("-07:00", was "-0700"). This is now consistent with GNU date's --iso-8601 format, yay, so we can eliminate a minor workaround.
- with --timestamp, all ADDed files are set to that timestamp, including the custom-reference-timestamp file that many tests rely on. So we need to split the build into two steps. But:
- ...with a two-step build I need to use --squash-all, not --squash, but:
- ... (deep sigh) --squash-all doesn't work with --timestamp (containers#14536) so we need to alter existing tests to deal with new image layers.
- And, long and sordid story relating to --rootfs. TL;DR that option only worked by a miracle relating to something special in one specific test image; it doesn't work with any other images. Fix seems to be complicated, so we're bypassing with a FIXME (containers#14505).

And, unrelated:
- remove obsolete skip and workaround in run-basic test (dating back to varlink days)
- add a pause-image cleanup to avoid icky red warnings in logs

Fixes: containers#14456

Signed-off-by: Ed Santiago <[email protected]>
A friendly reminder that this issue had no activity for 30 days.
@mheon This issue is waiting on feedback from you?
A friendly reminder that this issue had no activity for 30 days.
@mheon Reminder
Trying to remember what was going on here... IMO we should just add support for restarting on healthcheck failure and remove this. It doesn't seem to be working as advertised in this case, at least.
A friendly reminder that this issue had no activity for 30 days.
We now have healthcheck restart capability. I believe that solves the problem. Reopen if I am mistaken.
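(Presumably this refers to the health-check on-failure actions, e.g. the --health-on-failure=restart option, added in later Podman releases.)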
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When a pod is created via "podman play kube" with a liveness probe, it is always running with "unhealthy" status.
I checked the code and found:
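(The snippet in question appears to be the restart-policy check in pkg/specgen/generate/kube/kube.go discussed in the comments above, which chooses between appending "exit 1" and "kill 1" to the probe command.)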
So I inspected my container (created with 'play kube') and it has a restart policy:
and "exit 1" failureCmd instead "kill 1"
Steps to reproduce the issue:
Create a Kubernetes YAML file with a liveness probe and restartPolicy: OnFailure
Run podman play kube on it
Describe the results you received:
The container is always running with "unhealthy" status.
Describe the results you expected:
The unhealthy container should be killed.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):