Support max per key constraint #1146
Comments
Hey, this is a great idea. Global constraints like this are something we want in the future, but we are not actively working on it in the short term.
@dadgar any chance of bumping this up? Spreading things evenly among racks, coupled with Nomad's aggressive preference for bin-packing, makes this a tough spot for services requiring redundancy.
So, neither
I would love to be able to never run more than 3 allocs of the job per AZ - or even better for my use case
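A rough sketch of what such a per-AZ cap could look like using the `distinct_property`/`value` form that the PR later in this thread adds; `${meta.az}` is an assumed node meta attribute for illustration, not something defined in this issue:

```
# Sketch only: assumes each client config sets a node meta value, e.g.
#   meta { az = "us-east-1a" }
# With the distinct_property limit described later in this thread, no more
# than 3 allocations of the task group would be placed on nodes that share
# the same meta.az value.
constraint {
  distinct_property = "${meta.az}"
  value             = "3"
}
```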
At the time this was filed,
That would be so badass!
Agreed, this would be splendid. The use case is similar to what @jippi highlights. @autocracy, my reading of your comment is that it would solve @jippi's first case, but not the case I just stated (which seems like a case most people running services would want). Is that accurate?
@jippi When you say
@dvusboy yes :)
@timperrett Sure. Will try to get this into 0.6.1.
@dadgar You, sir, are a gent! That would be awesome!!!
And it's not just Alex; all the Nomad devs have been extremely responsive and helpful.
This PR enhances the `distinct_property` constraint such that a limit can be specified in the RTarget/value parameter. This allows constraints such as:

```
constraint {
  distinct_property = "${meta.rack}"
  value             = "2"
}
```

This restricts any given rack from running more than 2 allocations from the task group.

Fixes #1146
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
The simple version of this is that each rack within our datacenter will have its own separate meta.rack attribute. For example, we might have 10 task groups to be run, but with a maximum of 3 scheduled on any given rack.
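A minimal sketch of how this request maps onto the `distinct_property` limit from the PR above, reading "10 task groups" as a single group with `count = 10`; the group name is illustrative, and it assumes each client node is configured with a `meta.rack` value:

```
group "example" {
  count = 10

  # At most 3 allocations of this group may be placed on nodes that
  # share the same meta.rack value.
  constraint {
    distinct_property = "${meta.rack}"
    value             = "3"
  }

  # ... tasks omitted ...
}
```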