No port for job queue consumer #376
Conversation
Just noticed this in console: https://github.com/inviqa/harness-base-php/blob/0.10.x/src/_base/helm/app/templates/application/console/deployment.yaml#L46-L52 (also the spryker jenkins-runner)
not a firm approval given how we did it in other services
It'd be better to make this consistent with existing non-http containers like app-init/migrate/console, which use a command
I disagree that a pod should restart if the database is down; that's cascading the failure. But we can make it test whether the process we wish to run is running.
It may fail for reasons other than the database being down, such as a node network link failure or incorrect db credentials. We treat normal apps as unhealthy when their connections are down, and not just because of the experience to users. The other containers I mention do this too, so your opinion applies to them as well, and you can change the state command in another PR.
Either way, I think it should be something reportable rather than only logged when a worker job has run, as there are more cases of db connection failure than just the db itself being down.
I don't think it's worth checking that the process we're running is running, as docker/tini should shut down the container/pod if it exits (unless you can think of other cases?).
Forgot this was just a readiness probe, so the container won't get restarted upon a readinessProbe failure, only marked unready. This container gets no traffic via a Service, so whether it is ready or not does not matter. It doesn't make sense to check for processes in a readinessProbe, so I will convert it to check if the database is available, as recommended.
Supervisor is still in use for this image as it's based on php-fpm, so any application crashes will get restarted. What we are missing is any kind of consequence for crashing more than X times in a row, which would cause the pod to crash and be rescheduled elsewhere, e.g. https://github.com/continuouspipe/dockerfiles/blob/master/ubuntu/16.04/etc/supervisor/conf.d/kill_supervisord_upon_fatal_process_state.conf + https://github.com/continuouspipe/dockerfiles/blob/master/ubuntu/16.04/usr/local/share/supervisord/kill_supervisord_upon_fatal_process_state.py
68147ac to 6732c07
    initialDelaySeconds: 5
    exec:
      command:
        - /bin/readiness.sh
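The probe delegates to /bin/readiness.sh, whose contents are not part of this excerpt. A minimal sketch of what a database-availability check in that script could look like, assuming a PHP image with PDO available and hypothetical DB_DSN/DB_USER/DB_PASSWORD environment variables (names are illustrative, not taken from the harness):

    #!/bin/sh
    # Hypothetical sketch of a database-availability readiness check; the real
    # /bin/readiness.sh is not shown in this diff, and the variable names below
    # are assumptions.
    set -eu
    # Attempt a real connection; PDO throws on failure, php exits non-zero, the
    # probe fails, and the pod is marked unready (not restarted).
    php -r 'new PDO(getenv("DB_DSN"), getenv("DB_USER"), getenv("DB_PASSWORD"));'

Actually connecting (rather than just pinging the host) also covers the incorrect-credentials case raised above.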
Ah, I had thought the app/sidekick stuff (so app state) was copied into php-fpm, but it seems not.
Oops.
This reverts commit 6732c07.
Fixes a readinessProbe failure, as the only processes inside are: