Fixes: #236 - Running Zammad with replicas > 1 #243
Conversation
@klml @monotek this is not finished yet, but I would like to ask if you want to try this and provide first feedback. Works here with nginx and railsserver scaled up.
@mgruner Thank you very much. OpenShift is working perfectly. We are still doing a few technical tests, but it looks very good. HA works flawlessly. I just simulated a data center failure and we survived it without any measurable downtime ;) Moving the inits to the k8s Job is great; it makes the rollout much faster and more stable, as I don't necessarily have to restart nginx and rails during a Helm update.
Sorry for the delay.
Thanks for the update @monotek. Looking forward to your feedback!
@monotek just added the custom TMP handling to all deployment pods, as it turned out that not only the railsserver needs to be able to create temporary files.
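For illustration, here is a minimal sketch of such TMP handling on a deployment pod. This is a hypothetical fragment, not the chart's actual template; the volume name and mount path are assumptions:

```yaml
# Assumed pod spec fragment: give the pod writable scratch space for
# temporary files via an emptyDir volume.
spec:
  volumes:
    - name: tmp
      emptyDir: {}
  containers:
    - name: zammad-nginx
      volumeMounts:
        - name: tmp
          mountPath: /tmp   # assumed path; later comments discuss /opt/zammad/tmp
```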
@mgruner from our side this branch still works very well. It's been running stable (and faster) on testing since before Christmas 👍
@monotek and I had a very productive call about this matter. We agreed that it is the right direction and will bring a major improvement for Zammad users on k8s. However, there is an important consideration we need to make here regarding what Zammad 6.2 currently requires. Therefore we propose the following procedure and intermediate steps:
Despite the fact that the branch is running, we had the problem today that a manual `rake zammad:searchindex:rebuild` runs into a problem like #212 ;)
Perfect for us. 👍
I looked into this and found an issue with a missing entry point in the elasticsearch-init job container. This should be fixed by 409937d. Can you let me know if this helps, or otherwise send details about the error, please? It cannot be the same issue as in #212, because the StaticAssets handling is no longer present in Zammad 6.2.
Force-pushed from 409937d to cffcd8b.
@klml @monotek the recent commit drops the creation of an internal PVC, and requires an existing volume claim to be provided instead.

@klml this will no longer work correctly with Zammad 6.2, as it has no volume for the `var` directory any more.
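For orientation, providing such an existing claim could look roughly like this in the values (the key names here are assumptions; check the chart's values.yaml for the real ones):

```yaml
# Hypothetical values.yaml fragment: point the chart at a pre-created
# ReadWriteMany-capable PVC instead of letting it create an internal one.
zammadConfig:
  storageVolume:
    existingClaim: zammad-shared-data   # assumed claim name
```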
@mgruner thanks for letting me know. We had this branch only on a dev environment, and will wait for 6.3.
@mgruner
Nice that the "var" dir issue is fixed 👍
From the code it looks good to me now :)
I haven't had time to test it myself though.
Force-pushed from 7489495 to c74c8d1.
@klml can you please have a look at f3bfc0b? This should fix the issue. It makes the volume-permissions container's command configurable, so that you can replace it with something that works in OpenShift. Please let me know if this works and if the updated description for OpenShift in the Readme is correct now.
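As a rough sketch, overriding such an init container command could look like this (the values keys are assumptions and may differ from the chart's actual ones):

```yaml
# Hypothetical values.yaml fragment: replace the default permission-fixing
# command with a no-op so the init container also succeeds on OpenShift,
# where pods usually may not chown volumes.
zammadConfig:
  initContainers:
    volumePermissions:
      command:
        - "sh"
        - "-c"
        - "true"
```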
After checking again, I need to ask: can you explain this please? We already use static container names. Only the pod names are dynamic because they are controlled by the deployments, and this is probably by design. Did I miss something here?
Sorry, I was confused because the "{{ .Chart.Name }}" var is used in the container name. Example: `- name: {{ .Chart.Name }}-nginx`
The var is "zammad" all the time and should not change.
Therefore lets keep it as it is.
For the rest i'm also ok with keeping it for now and change later, if somebody complains :)
The "/opt/zammad/tmp/tmp" dir does look a bit weird. Can we use "/opt/zammad/var/tmp" again?
I don't think so, as there is no `var` directory any more. Alternatively @klml you could try not modifying the tmp path.
I would prefer the in-memory workaround too, so we can just use `/opt/zammad/tmp`.
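A minimal sketch of that in-memory workaround, assuming a tmpfs-backed emptyDir (fragment only; the chart wires this up through its own templates):

```yaml
# Assumed pod spec fragment: memory-backed scratch space mounted at the
# default Zammad tmp path, so no path override is needed.
volumes:
  - name: tmp
    emptyDir:
      medium: Memory
containers:
  - name: zammad-railsserver
    volumeMounts:
      - name: tmp
        mountPath: /opt/zammad/tmp
```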
Great. I removed my custom settings and now run with the defaults.

I tested this:
- ingress/routes is missing
- ingress/routes get removed on an existing instance and a fresh deployment
@mgruner ingress/routes were missing because the ingress still listened to the dynamic service name.
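For context, a hypothetical Ingress fragment showing the kind of static service reference needed (all names assumed):

```yaml
# Assumed Ingress backend: reference the nginx Service by a stable name
# instead of one derived from a changing template value.
rules:
  - http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: zammad-nginx   # hypothetical static Service name
              port:
                number: 8080
```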
Force-pushed from f3bfc0b to b88b569.
@mgruner works now out of the box ;) let me get the approval from the functional department, then we can merge.
@mgruner looks good to me! thank you very much for this 🙏
I agree and opened #265
What this PR does / why we need it

- Splits the `StatefulSet` into 4 `Deployments` and one `Job` (see the sketch after this list).
- `Deployments` are freely scalable; the scheduler and websocket must remain at `replicas: 1`.
- The `Job` will be re-created on any chart update (via a uuid in the name) and run the migrations. Deployments will fail until migrations are executed.
- Depends on
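As a rough illustration of the uuid-in-the-name mechanism, here is a hypothetical Helm template fragment (not the chart's actual template; image and command are assumptions):

```yaml
# Because the name embeds a fresh uuid, Helm renders a new Job object on
# every install/upgrade, so the migrations are executed again each time.
apiVersion: batch/v1
kind: Job
metadata:
  name: zammad-init-{{ uuidv4 | trunc 8 }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: zammad-init
          image: ghcr.io/zammad/zammad:6.3.0   # assumed image reference
          command: ["bundle", "exec", "rake", "db:migrate"]   # assumed migration task
```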
Which issue this PR fixes

Running Zammad with `replicas > 1` (#236)
Special notes for your reviewer

Open questions / issues:

- Requires `ReadWriteMany` access from now on. This may complicate deployments.
- Updating existing instances where the `StatefulSet` was used, because a new PVC will be created.

Checklist

Open Tasks