scheduler: datacenter updates should be destructive #10864
Conversation
Updates to the datacenter field should be destructive for any allocation that is on a node no longer in the list of datacenters, but inplace for any allocation on a node that is still in the list. Add a check for this change to the system and generic schedulers after we've checked the task definition for updates and obtained the node for each current allocation.
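The decision described above can be sketched as a small classification loop. This is a simplified, hypothetical illustration, not Nomad's actual scheduler code: the `Node`/`Alloc` structs and `classify` are invented stand-ins, and `containsDC` mimics `helper.SliceStringContains` from the diff.

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler's types (illustrative only).
type Node struct {
	ID         string
	Datacenter string
}

type Alloc struct {
	ID   string
	Node Node
}

// containsDC reports whether dc appears in the job's datacenter list
// (a stand-in for Nomad's helper.SliceStringContains).
func containsDC(datacenters []string, dc string) bool {
	for _, d := range datacenters {
		if d == dc {
			return true
		}
	}
	return false
}

// classify splits current allocations into those eligible for an
// in-place update (node still in a listed datacenter) and those that
// must be destructively replaced (node's datacenter was removed).
func classify(jobDCs []string, allocs []Alloc) (inplace, destructive []Alloc) {
	for _, a := range allocs {
		if containsDC(jobDCs, a.Node.Datacenter) {
			inplace = append(inplace, a)
		} else {
			destructive = append(destructive, a)
		}
	}
	return inplace, destructive
}

func main() {
	allocs := []Alloc{
		{ID: "a1", Node: Node{ID: "n1", Datacenter: "dc1"}},
		{ID: "a2", Node: Node{ID: "n2", Datacenter: "dc2"}},
	}
	// The job was updated to run only in dc1: a1 can update in place,
	// a2's node is no longer eligible, so a2 must be replaced.
	inplace, destructive := classify([]string{"dc1"}, allocs)
	fmt.Println(len(inplace), len(destructive))
}
```

In the real schedulers the destructive path falls out of the in-place update loop skipping the allocation, rather than an explicit second list.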
Force-pushed from 7461a56 to 00fb80f
// The alloc is on a node that's now in an ineligible DC
if !helper.SliceStringContains(job.Datacenters, node.Datacenter) {
	continue
}
As I noted in #10746 (comment), I'm not wild about the placement of this check, because it means we can't really test the behavior except in whole-scheduler tests, as I've done in TestServiceSched_JobModify_Datacenters.
LGTM.
scheduler: datacenter updates should be destructive
Fixes #10746