
keycloak_quarkus role restarts all nodes at the same time #182

Closed
roumano opened this issue Mar 28, 2024 · 1 comment · Fixed by #231
Labels
enhancement New feature or request

Comments

@roumano

roumano commented Mar 28, 2024

SUMMARY

Currently, if a configuration value changes, the keycloak_quarkus role restarts all nodes at the same time,
so the Keycloak service is down for that (short) period.

This also introduces a problem when a bad configuration is applied:
all nodes are changed and restarted, so the whole Keycloak service stays down until a correction is made.
For example, if I point the SSL key file at a non-existent file (or one with a permission issue, or ...) with:

keycloak_quarkus_key_file: "/etc/ssl/private/not_existing_file.key.pem"

all Keycloak nodes will be down, so the whole service will be down.

I think we should, at the very least, introduce a throttle or forks limit on the service restart,
or, even better, apply the change to the first node before the other nodes.
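A minimal sketch of the throttle idea, assuming the role restarts Keycloak through a systemd unit (the unit name and task shape here are assumptions, not the role's actual tasks):

# Hypothetical: serialize the restart so only one node is down at a time.
# throttle: 1 is a core Ansible task keyword; the unit name "keycloak" is assumed.
- name: Restart keycloak one node at a time
  ansible.builtin.systemd:
    name: keycloak
    state: restarted
  become: true
  throttle: 1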

Personally, I like how the restart has been implemented in this role: https://github.com/mrlesmithjr/ansible-mariadb-galera-cluster/blob/master/tasks/setup_cluster.yml :

  • it does not use notify, but register: "_mariadb_galera_cluster_reconfigured"
  • then it applies the change to the first node only:
- name: setup_cluster | cluster rolling restart - apply config changes (first node)
  ansible.builtin.include_tasks: manage_node_state.yml
  • then (if the first node restarts successfully) it restarts the others:
- name: setup_cluster | cluster rolling restart - apply config changes (other nodes)
  ansible.builtin.include_tasks: manage_node_state.yml

This solution would resolve both issues I described earlier; a rough adaptation is sketched below.
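As an adaptation of that pattern to keycloak_quarkus (a sketch only: the registered variable, the template paths, and the manage_node_state.yml file are hypothetical, mirroring the galera role rather than this role's actual tasks):

- name: Apply configuration changes
  ansible.builtin.template:
    src: keycloak.conf.j2
    dest: /opt/keycloak/conf/keycloak.conf
  register: _keycloak_quarkus_reconfigured

- name: Rolling restart | apply config changes (first node)
  ansible.builtin.include_tasks: manage_node_state.yml
  when:
    - _keycloak_quarkus_reconfigured is changed
    - inventory_hostname == ansible_play_hosts[0]

# Only reached if the first node came back up; the restart task inside
# manage_node_state.yml would carry throttle: 1 so one node bounces at a time.
- name: Rolling restart | apply config changes (other nodes)
  ansible.builtin.include_tasks: manage_node_state.yml
  when:
    - _keycloak_quarkus_reconfigured is changed
    - inventory_hostname != ansible_play_hosts[0]

With this shape, if the first node fails to restart (for example because of the broken keycloak_quarkus_key_file above), the play fails there and the remaining nodes keep serving traffic.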

ISSUE TYPE
  • Bug Report
ANSIBLE VERSION
ansible-core 2.14.1
COLLECTION VERSION
middleware_automation.keycloak 2.1.0  

STEPS TO REPRODUCE
  • change a configuration value, for example keycloak_quarkus_frontend_url
  • run the playbook with the role middleware_automation.keycloak.keycloak_quarkus
EXPECTED RESULTS
  • the Keycloak services are restarted one after the other
ACTUAL RESULTS
  • all Keycloak services restart at the same time, without any error handling
ADDITIONAL INFORMATION

Also, I think start.yml and restart.yml should be merged (state: restarted always does everything state: started does, and more), and the "Wait until {{ keycloak.service_name }} becomes active {{ keycloak.health_url }}" task should also run as part of the restart behavior.
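For illustration, a merged restart flow could reuse that health wait (variable names are taken from the task name quoted above; the uri check and retry counts are assumptions):

- name: Restart {{ keycloak.service_name }}
  ansible.builtin.systemd:
    name: "{{ keycloak.service_name }}"
    state: restarted
  become: true

# Retry the health endpoint until it answers 200, so the next node
# is only restarted once this one is serving again.
- name: Wait until {{ keycloak.service_name }} becomes active {{ keycloak.health_url }}
  ansible.builtin.uri:
    url: "{{ keycloak.health_url }}"
  register: _health
  until: _health.status == 200
  retries: 25
  delay: 10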

@guidograzioli
Member

Thanks for reporting; we already have similar logic for the keycloak role, but it has not yet been ported to keycloak_quarkus. Ideally we would like to support custom restart orchestration (provided by users from their calling playbooks) in addition to the default; but yes, implementation of both throttled restarts and wait_for_healthy conditions is on the roadmap.

@guidograzioli guidograzioli added the enhancement New feature or request label Mar 28, 2024
hwo-wd added a commit to world-direct/ansible-keycloak that referenced this issue May 14, 2024
hwo-wd added a commit to world-direct/ansible-keycloak that referenced this issue May 15, 2024