image healthchecks not being used (as rootless, user systemd service) #5680
Comments
@baude PTAL
@stefanb2 @giuseppe: per this PR, it is implied that this should be working; however, I can't replicate the tests from that PR in my environment. Can you help me figure out what I'm missing, since I'm not seeing the same result? (First, I've removed any service files I previously created and am starting with a fresh environment.)
I start podman (note, however, that I am not specifying a
Then I do the query performed in the PR:
The container is running, however:
Also including what the output of a status query on the user session looks like:
Curiously, we see the state as degraded?
I'm not sure if these failed scopes are related to the issue.
Actually, those checks also didn't work on another system where the scope issue does not exist, so that's likely unrelated.
@baude any chance you've been able to take a look or provide any advice, please?
I have the same issue... the health check defined in the Dockerfile is being ignored :/
Are health checks only supported if the image is built with --format docker? I am not sure if OCI supports healthchecks. If not, then perhaps that is why the health checks do not work? @baude WDYT?
Healthchecks are indeed docker-only |
The image I'm using (in rootless mode) is a docker-format image; health checks are still being ignored.
Correct, OCI does not support healthchecks ... we work around this in podman by allowing users to define them. I don't think this is the problem, as @aleks-mariusz states his image is a docker-format image.
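As a sanity check for the format question above, here is a rough way to confirm whether an image actually carries a healthcheck (a sketch; `myimage:latest` is a placeholder, and the exact inspect field name may differ between podman versions):

```shell
# Build in docker format so the HEALTHCHECK instruction is preserved;
# OCI-format builds drop it.
podman build --format docker -t myimage:latest .

# Inspect the stored image config; an empty result means no healthcheck
# survived the build.
podman image inspect --format '{{.Healthcheck}}' myimage:latest
```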
A friendly reminder that this issue had no activity for 30 days. |
Is this still broken in podman 1.9.2? |
Looks like it. Tried with --format=docker, and it used the cache :hmm:. However, after nuking all images (with
Since this is a buildah issue, closing. |
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I am using a container from Docker Hub that has healthchecks defined in it.
The container is a DNS proxy (for supporting DNS requests over HTTPS). At some point, however, the container stops responding to requests and no longer works (for an unknown reason). Rather than try to diagnose it, since it happens regularly and predictably enough, I feel it's easier just to restart the container, which is exactly what healthchecks are designed to do: the healthcheck performs a query, and so should catch this exact situation.
I cannot, however, for the life of me get this health-checking functionality to actually work, and I'm not entirely sure it is supported in rootless mode. It seems it should be, and I've searched the docs for how it uses systemd timers to perform the check and, if necessary, restart the container. But there are limited articles available on using this facility in this scenario (mainly just this one), and none of what they describe matches what I'm actually experiencing.
So either there's a defect here somewhere, or I'm simply doing something wrong (entirely possible).
Please help me figure out which it is :-)
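For anyone debugging the same symptoms: rootless healthchecks are driven by transient timer units in the user's systemd session, so commands along these lines can show whether the timer exists and whether the check itself passes (a sketch; `mycontainer` is a placeholder, and the inspect field path may vary between podman versions):

```shell
# Each running container with a healthcheck should have a transient
# <container-id>.timer unit in the user session.
systemctl --user list-timers --all

# Run the container's healthcheck once by hand; exit status 0 means healthy.
podman healthcheck run mycontainer

# Show the recorded health state from the container's inspect data.
podman inspect --format '{{.State.Healthcheck.Status}}' mycontainer
```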
Steps to reproduce the issue:
Describe the results you received:
The container continues to run/exist/live, but in a broken state :-(
Describe the results you expected:
The container should be restarted automatically.
Additional information you deem important (e.g. issue happens only occasionally):
The container seems to stop functioning after a few hours.
Here's what it looks like when it works..
..and when it stops working:
**Output of `podman version`:**

**Output of `podman info --debug`:**

**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

**Additional environment details (AWS, VirtualBox, physical, etc.):**
This is running on an Ubuntu 18.04 LTS (Bionic) VM under libvirt, with Linux kernel 5.3.0-42-generic.

**Output of `podman inspect`:**
As visible from the above inspect output, the container is currently in an unhealthy state; it is not, however, being restarted. It even lists when it failed the healthchecks (4 instances, 3 of them from previous manual restarts).
Additionally, here's the service unit file I created:
And the output of `systemctl --user status cloudflared`:
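For comparison, a user service generated by `podman generate systemd` typically looks roughly like the following (a sketch with placeholder names, not the unit actually used here). Note that `Restart=on-failure` only reacts to the podman process exiting; on its own it does not restart a container that a healthcheck has merely marked unhealthy, which matches the behaviour described in this report:

```ini
[Unit]
Description=Podman container: cloudflared (illustrative sketch)

[Service]
Type=forking
Restart=on-failure
ExecStart=/usr/bin/podman start cloudflared
ExecStop=/usr/bin/podman stop -t 10 cloudflared

[Install]
WantedBy=default.target
```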