[#1587] Adds connected worker info. to the healthcheck #1588
Conversation
@seanpreston please note that I made an additional change here! I updated the description and the …
@seanpreston whoops, I also realized there's no test for this either
@seanpreston can you update the health endpoint tests for this so it's clearer what to expect the responses to look like? (tbf I think there is a single one) Edit: never mind, I'm on it; I'll update the test real quick
@seanpreston given the logic that unhealthy workers aren't shown, why show "ok"? I'm updating the code here, but feel free to revert if you think it isn't more intuitive
Closes #1587
Code Changes

- Adds a `"workers": {...}` key to the healthcheck response to reflect the connectivity of any workers connected to the Celery backend (an illustrative response shape is sketched below)
- Moves the `worker` service back into the main compose file so that it's easier to spin things up
- Adds a `worker` posarg to the `dev` command that spins up the worker with the application
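For illustration only, the new key might sit in the healthcheck payload something like the sketch below; the sibling keys and the exact "healthy" value strings are assumptions, since the PR description doesn't show the full response:

```python
# Hypothetical shape only: the sibling keys and value strings here are
# assumptions, not taken from this PR.
example_health_response = {
    "webserver": "healthy",
    "database": "healthy",
    # New in this PR: one entry per worker connected to the Celery backend.
    "workers": {
        "celery@worker-1": "healthy",
    },
}
```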
Steps to Confirm
- Run `nox -s dev` and confirm the `workers` key is present at http://localhost:8080/health
- With `task_always_eager=false` in `celery.toml`, run `nox -s dev -- worker` and confirm the `workers` key is present at http://localhost:8080/health and contains one result

Confirm the response in each of these scenarios (a test sketch follows this list):

- Without workers configured
- With workers configured and healthy
- With workers configured but not healthy (manually kill the worker container)
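A minimal sketch of tests along these lines, assuming a FastAPI-style app and a patchable helper named `get_worker_info`; the helper name and both import paths are hypothetical stand-ins, since the real module layout isn't shown in this PR:

```python
# A sketch only: import paths and the helper name are hypothetical.
from unittest import mock

from fastapi.testclient import TestClient

from fides.api.main import app  # hypothetical location of the app


def test_health_without_workers() -> None:
    # No workers connected: the key is present but empty.
    with mock.patch("fides.api.health.get_worker_info", return_value={}):
        response = TestClient(app).get("/health")
    assert response.status_code == 200
    assert response.json()["workers"] == {}


def test_health_with_healthy_worker() -> None:
    # One connected worker that answered the ping.
    with mock.patch(
        "fides.api.health.get_worker_info",
        return_value={"celery@worker-1": "healthy"},
    ):
        response = TestClient(app).get("/health")
    assert response.json()["workers"] == {"celery@worker-1": "healthy"}
```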
Pre-Merge Checklist

- Update `CHANGELOG.md`
Description Of Changes

Right now this solution will report each connected worker on `/health`, for an accurate healthcheck of that container, based on the worker's `.ping()` response of `{"ok": "pong"}`. Alternatively there's a `.stats()` inspect method which could be used to show the `pid` of the worker on the container it's running on, the `uptime` of the process, `pool` data, and `backend` connection details; this seemed like overkill in the context of the other checks returning a simple "healthy".
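As a sketch of how the `.ping()` approach can work with Celery's inspect API; the broker URL and the `celery_app` object below are placeholders, not this repo's actual configuration:

```python
# A minimal sketch of the .ping() approach described above; the broker URL
# and celery_app are placeholders, not this repo's real config.
from celery import Celery

celery_app = Celery(broker="redis://localhost:6379/0")


def get_connected_workers() -> dict:
    """Ping every worker; a healthy one replies {"ok": "pong"}."""
    replies = celery_app.control.inspect().ping() or {}
    # Workers that are down never reply, so they are simply absent from
    # `replies`; that is why unhealthy workers aren't shown at all.
    return {
        name: "healthy" if reply.get("ok") == "pong" else "unhealthy"
        for name, reply in replies.items()
    }
```

Swapping `.ping()` for `.stats()` in the same call would expose the `pid`, `uptime`, `pool`, and `backend` details mentioned above, at the cost of a much larger payload.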