Nomad version
0.9.3
Operating system and Environment details
Issue
Hi HashiCorp, thanks for the great work. I have a question very similar to this issue: #5856
In my case, I want to use Nomad to manage a fleet running multiple RocketMQ clusters. A working cluster consists of one master broker and two slave brokers; each broker holds tons of data and maintains an In-Sync Replica (ISR) set. I found that updating an existing task can cause its allocation to migrate to another node, and syncing the data needed to bring the new broker back into the ISR set may take hours.
I use spread and distinct_property constraints to ensure that my brokers are deployed in different zones. Is there any way to keep a task on its original node during a rolling update, while still letting it migrate to a new node when its node goes down?
Many thanks!
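For reference, here is a minimal sketch of the relevant stanzas in the rendered job. The zone attribute name (`${meta.zone}`) and the image are placeholders, not the exact values from my cluster:

```hcl
job "rocketmq-broker" {
  datacenters = ["dc1"]

  # Prefer to balance brokers across zones (assumed client meta attribute "zone").
  spread {
    attribute = "${meta.zone}"
    weight    = 100
  }

  # Hard requirement: no two brokers of this job in the same zone.
  constraint {
    operator  = "distinct_property"
    attribute = "${meta.zone}"
  }

  group "broker" {
    count = 3

    task "broker" {
      driver = "docker"
      config {
        image = "apache/rocketmq:4.5.2" # placeholder image/version
      }
    }
  }
}
```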
Reproduction steps
Job file (if appropriate)
I manage my Nomad job with Terraform, so the job spec is actually a Terraform template file (see the sketch below).
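Roughly, the Terraform side looks like this sketch (assuming Terraform 0.12's templatefile function and the Nomad provider's nomad_job resource; the file path and template variables are hypothetical):

```hcl
resource "nomad_job" "rocketmq" {
  # Render the Nomad job spec from a template file; path and variables are placeholders.
  jobspec = templatefile("${path.module}/templates/rocketmq.nomad.tpl", {
    image_tag = "4.5.2"
    replicas  = 3
  })
}
```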
Nomad Client logs (if appropriate)
Nomad Server logs (if appropriate)