Respect alloc job version for lost/failed allocs #8691
Conversation
This change fixes a bug where lost/failed allocations are replaced by allocations with the latest job version, even if that version hasn't been promoted yet. Now, when generating a plan for lost/failed allocations, the scheduler first checks whether the current deployment is in the canary stage, and if so, it ensures that any lost/failed allocation is replaced with one running the latest promoted version instead.
```go
}

// Defensive check - if there is no appropriate deployment for this job, use the latest
if job != nil && job.Version >= missing.MinJobVersion() && job.LookupTaskGroup(tg.Name) != nil {
```
I made a few defensive checks here: if we see unexpected state (e.g. a job without the expected TaskGroup, or no non-promoted version), we fall back to using the latest version. This seems better than a panic, but I'm not sure if we should simplify this.
is this unexpected? for jobs without an update stanza, there won't be deployments, so `downgradedJobForPlacement` will return `nil`. (in that case, the latest job is exactly what we want.)
Yes, it's unexpected. `missing.DowngradeNonCanary()` should always be false.
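For context, a minimal sketch of the fallback pattern this thread discusses (the surrounding plumbing and the `downgradedJob` variable are assumptions based on the diff):

```go
// Sketch only: prefer the downgraded (promoted, non-canary) job version for
// the placement, but fall back to the latest job when the lookup yields
// nothing usable - a missing job, a version older than the placement allows,
// or a job without this task group.
job := downgradedJob // nil when no deployment applies, e.g. no update stanza
if job == nil || job.Version < missing.MinJobVersion() || job.LookupTaskGroup(tg.Name) == nil {
	job = s.job // defensive fallback to the latest version
}
```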
```diff
 strategy := tg.Update
 canariesPromoted := dstate != nil && dstate.Promoted
-requireCanary := numDestructive != 0 && strategy != nil && len(canaries) < strategy.Canary && !canariesPromoted
+requireCanary := (len(destructive) != 0 || (len(untainted) == 0 && len(migrate)+len(lost) != 0)) &&
+	strategy != nil && len(canaries) < strategy.Canary && !canariesPromoted
```
This is a semi-related band-aid that we'll probably need to investigate further. The code here determines whether canaries are needed by checking if we have any destructive update. However, if all allocations are dead (because the nodes are lost), `len(destructive)` will be 0. I changed the condition to account for that scenario.
it might be nice to break this conditional up a bit, and capture some of what's going on here.
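A sketch of what that decomposition might look like (the sub-condition names are invented for illustration; the variables come from the diff above):

```go
// Illustrative only: name the pieces of the requireCanary condition.
hasDestructiveUpdates := len(destructive) != 0
// All remaining allocs are being replaced because their nodes are gone.
onlyLostOrMigrating := len(untainted) == 0 && len(migrate)+len(lost) != 0
canariesOutstanding := strategy != nil && len(canaries) < strategy.Canary

requireCanary := (hasDestructiveUpdates || onlyLostOrMigrating) &&
	canariesOutstanding && !canariesPromoted
```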
```diff
@@ -533,9 +533,12 @@ func (a *allocReconciler) computeGroup(group string, all allocSet) bool {
 	})
 	a.result.place = append(a.result.place, allocPlaceResult{
 		name:   alloc.Name,
-		canary: false,
+		canary: alloc.DeploymentStatus.IsCanary(),
```
The code here assumed that all alloc migrations are non-canary. An odd assumption.
```go
//
// Zero dstate.DesiredCanaries indicates that the TaskGroup allocations were updated in-place without using canaries.
if dstate := d.TaskGroups[tgName]; dstate != nil && (dstate.Promoted || dstate.DesiredCanaries == 0) {
	job, err := s.state.JobByIDAndVersion(nil, ns, jobID, d.JobVersion)
```
Should we first compare `d.JobVersion` against `s.job.Version` and, if they're equal, return nil since they're equivalent?
That's reasonable but also seems like a micro-optimization - I may consider it when addressing reviews.
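The suggested short-circuit would be roughly this (a sketch; the function's exact return values are assumed, with a nil job meaning "use the latest version"):

```go
// Sketch: skip the state-store lookup when the deployment already points
// at the job version being scheduled - the two are equivalent.
if d.JobVersion == s.job.Version {
	return nil, nil // assumed signature; a nil job means "use the latest"
}
```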
Great work. I'm tempted to ask for refactoring the inner loop of computePlacements to make local variables and SetJob(...) state easier to follow, but I'm not sure it'd help readability.
Does this fix #8439? If so, can we make the steps in that issue (or similar) into an e2e test? We didn't have the e2e infrastructure around when deployments were written, so it would be nice to backfill.
scheduler/generic_sched.go (outdated)
```go
if job != nil {
	jobVersion = int(job.Version)
}
s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion(), "found_version", jobVersion)
```
Our server logs are notoriously difficult for operators to determine how to react to, so I'm wondering if there's something else we can do here:

- Can we log differently if `job` is nil instead of using a sentinel value? I think that would improve the clarity of that case considerably (see the sketch below).
- Can we lower the log level to debug? I'm unsure what use this log line is outside of development. If an invariant has failed, perhaps we need to be more aggressive in our wording?
- If there's anything an operator can and should do to remediate this, let's explicitly call it out.
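For the first point, a sketch of what splitting the log call might look like (log messages are illustrative, not from the PR):

```go
// Sketch only: distinguish "no job found" from "wrong version found"
// instead of encoding both in a sentinel found_version value.
if job == nil {
	s.logger.Debug("no appropriate job found for placement; using the latest",
		"expected_version", missing.MinJobVersion())
} else {
	s.logger.Debug("found lower job version than expected; using the latest",
		"expected_version", missing.MinJobVersion(), "found_version", job.Version)
}
```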
I'll downgrade to debug.
```go
		s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion(), "found_version", jobVersion)
	}
}

// Check if this task group has already failed
if metric, ok := s.failedTGAllocs[tg.Name]; ok {
	metric.CoalescedFailures += 1
```
Do we need to restore the stack's original Job here?
It needs to happen below, after a placement is made - particularly after `s.selectNextOption` is called. Will update the comment.
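In other words, the intended ordering is roughly this (a sketch; it assumes the stack exposes `SetJob` as used elsewhere in the scheduler):

```go
// Sketch of the ordering: point the stack at the older job only for this
// placement, then restore the latest job once the node option is selected.
s.stack.SetJob(downgradedJob)                   // evaluate against the old version
option := s.selectNextOption(tg, selectOptions) // placement uses the old job
s.stack.SetJob(s.job)                           // restore the latest version afterwards
```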
scheduler/generic_sched.go (outdated)

```diff
@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
 	// Compute top K scoring node metadata
 	s.ctx.Metrics().PopulateScoreMetaData()
 
+	// restore stack to use the latest job version again
```
Why are we doing this here? Might be worth including the reasoning in the comment as the actual code behavior is fairly obvious and trivial.
nomad/structs/structs.go (outdated)

```go
// To save space, it clears the Job field so it can be derived from the plan Job.
// If keepJob is true, the normalization is skipped to accommodate cases where a plan
// needs to support multiple versions of the same job.
func (p *Plan) AppendAlloc(alloc *Allocation, keepJob bool) {
```
Perhaps this should accept the job as an argument to simplify the one place it gets set to a non-nil value? It would simplify a couple of checks and be as readable, if not a tiny bit more. Just an idea.
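A sketch of that idea (hypothetical signature; a nil job would keep today's normalization behavior):

```go
// Sketch only: pass the job explicitly instead of a keepJob flag. Callers
// that need the alloc to carry a specific (e.g. downgraded) job pass it;
// everyone else passes nil and keeps the space-saving normalization.
func (p *Plan) AppendAlloc(alloc *Allocation, job *Job) {
	alloc.Job = job // nil in the common, normalized case
	node := alloc.NodeID
	p.NodeAllocation[node] = append(p.NodeAllocation[node], alloc)
}
```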
Yes! Will add an e2e test in a follow-up PR.
some comments, but i didn't see any problems with this fix.
i did not perform any manual testing.
scheduler/generic_sched.go (outdated)

```go
if job != nil {
	jobVersion = int(job.Version)
}
s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion(), "found_version", jobVersion)
```
not sure if this deserves a warning when `job.Update.MaxParallel == 0` (i.e., no deployments)
if MaxParallel == 0, it will be canonicalized to `job.Update = nil`, downgrading will not be relevant, and this path isn't executed.
scheduler/generic_sched.go (outdated)

```diff
@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
 	// Compute top K scoring node metadata
 	s.ctx.Metrics().PopulateScoreMetaData()
 
+	// restore stack to use the latest job version again
```
this makes me a little uncomfortable. maybe it feels a little fragile, to swap out the job and then have to un-swap it later?
i'm not sure i have a constructive criticism here; maybe, if it's not too expensive, we should drop the conditional and always restore.
```diff
@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
 	// Compute top K scoring node metadata
 	s.ctx.Metrics().PopulateScoreMetaData()
 
+	// restore stack to use the latest job version again
+	if downgradedJob != nil {
```
Suggested change:

```diff
-	if downgradedJob != nil {
+	if *s.stack.jobVersion != s.job.Version {
```
maybe this is better?
i didn't see @schmichael's review before finishing mine, i had some of the same concerns, which i address in my review.
To address review comments: `(alloc.DeploymentStatus == nil || !alloc.DeploymentStatus.IsCanary())` and `!alloc.DeploymentStatus.IsCanary()` are equivalent.

Co-authored-by: Chris Baker <[email protected]>
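That equivalence holds because `IsCanary` is presumably nil-safe, along these lines (a sketch, not a quote of the Nomad source):

```go
// Sketch of a nil-safe accessor: a nil DeploymentStatus simply means
// "not a canary", so the explicit nil check in the condition was redundant.
func (a *AllocDeploymentStatus) IsCanary() bool {
	if a == nil {
		return false
	}
	return a.Canary
}
```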
Force-pushed from 26dc560 to f075bcc.
Implementation High Level

At a high level, the fix makes the following changes:

- extends the `allocPlaceResult` struct with a downgrade indicator along with the minimum job version (sketched below)
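A sketch of the extended struct (the existing fields are taken from the diff context; the new field names mirror the `DowngradeNonCanary()` and `MinJobVersion()` accessors seen above, and everything else is an assumption):

```go
// Sketch of the extended placement result carried by the reconciler.
type allocPlaceResult struct {
	name          string
	canary        bool
	taskGroup     *structs.TaskGroup
	previousAlloc *structs.Allocation

	// New: ask the scheduler to place this alloc with an older, promoted
	// (non-canary) job version, no lower than minJobVersion.
	downgradeNonCanary bool
	minJobVersion      uint64
}
```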
FAQ

Why use the latest promoted or non-canary version? Why not the latest job Stable version?

The promotion semantics are somewhat group-specific. Consider a job with two TaskGroups, A and B, in Version 0, where Version 1 updates A significantly (requiring a canary) but only updates B's count and metadata. B allocations should always use Version 1, and non-canary A allocations should use Version 0. Using the latest Stable job would not be appropriate here.
In this approach, does the scheduler respect the count and resources found in the latest promoted deployment?

The scheduler will use the resources of the appropriate job version for the alloc, if the resources changed. However, the latest job's Count field is the canonical value, and this PR doesn't change that. If a job has a TaskGroup with count=3 in Version 0, and Version 1 changes the count to 2 and requests larger resources, then a Version 0 allocation will be stopped immediately, leaving 2 Version 0 allocations along with some canaries.
Can this PR handle cases where we reschedule canary and non-canary failed allocations?

Yes. When migrating allocations, a canary will be replaced by another canary, and non-canaries will be replaced by non-canary instances, each with the expected version.
Fixes #8439