
Respect alloc job version for lost/failed allocs #8691

Merged: 5 commits from b-reschedule-job-versions into master on Aug 25, 2020

Conversation

@notnoop notnoop commented Aug 19, 2020

This change fixes a bug where lost/failed allocations are replaced by
allocations running the latest job version, even if that version hasn't
been promoted yet.

Now, when generating a plan for lost/failed allocations, the scheduler
first checks whether the current deployment is in the canary stage, and if
so, it ensures that any lost/failed allocation is replaced with one running
the latest promoted version instead.

Implementation Overview

At a high level, the fix makes the following changes:

  1. When rescheduling (or migrating) non-canary allocations, the reconciler marks the resulting allocPlaceResult struct with a downgrade indicator along with the minimum job version (see the sketch after this list).
  • The minimum version ensures that no allocation is accidentally rolled back when rescheduling.
  2. When making the placement and computing resources, the generic scheduler finds the latest non-canary deployment associated with the task group and uses its job version for scheduling purposes.
  3. When inserting the resulting alloc into the plan, we ensure that the job info is attached to the alloc.
  • Typically we don't populate the Allocation.Job field in the plan, as it would be duplicated data, and the Nomad FSM populates alloc.Job from the plan's job. Here, we want the alloc's job to differ from the plan's.
  4. When applying the plan results to the FSM, Nomad respects the alloc.Job field if present and avoids overwriting it.
  • This is already the current behavior, no changes there, but a test ensures that's the case.
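
As a rough illustration of step 1, here is a minimal Go sketch using simplified stand-in types rather than the real Nomad structs; the downgradeNonCanary and minJobVersion field names are placeholders chosen for this example:

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler types; the real Nomad structs
// carry many more fields.
type allocPlaceResult struct {
	name               string
	canary             bool
	downgradeNonCanary bool   // placeholder flag: place with an older, promoted job version
	minJobVersion      uint64 // never place with a version older than this
}

type allocation struct {
	Name       string
	JobVersion uint64
	Canary     bool
}

// markReplacement builds a placement for a lost/failed alloc. Non-canary
// allocs are flagged for downgrade so the scheduler can pick the latest
// promoted (non-canary) job version instead of the unpromoted one.
func markReplacement(alloc allocation, deploymentInCanaryStage bool) allocPlaceResult {
	return allocPlaceResult{
		name:               alloc.Name,
		canary:             alloc.Canary,
		downgradeNonCanary: deploymentInCanaryStage && !alloc.Canary,
		minJobVersion:      alloc.JobVersion,
	}
}

func main() {
	p := markReplacement(allocation{Name: "web[0]", JobVersion: 3, Canary: false}, true)
	fmt.Printf("%+v\n", p)
}
```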

FAQ

Why use the latest promoted or non-canary version? Why not the latest Stable job version?

Promotion semantics are group specific. Consider a job with two task groups, A and B, at Version 0; Version 1 then updates A significantly (requiring a canary) but only changes B's count and metadata. B allocations should always use Version 1, while non-canary A allocations should use Version 0. Using the latest Stable job would not be appropriate here.

In this approach, does the scheduler respect the count and resources found in the latest promoted deployment?

The scheduler uses the resources of the appropriate job version for the alloc, if the resources changed. However, the latest job's Count field remains the canonical value, and this PR doesn't change that. For example, if a task group has count=3 in Version 0, and Version 1 lowers the count to 2 while requesting larger resources, one Version 0 allocation is stopped immediately, leaving 2 Version 0 allocations alongside some canaries.

Can this PR handle cases where we reschedule both canary and non-canary failed allocations?

Yes. When migrating allocations, a canary is replaced by another canary and a non-canary is replaced by a non-canary instance, each with the version expected for it.

Fixes #8439

@notnoop notnoop requested review from schmichael and cgbaker August 19, 2020 14:17
}

// Defensive check - if there is no appropriate deployment for this job, use the latest
if job != nil && job.Version >= missing.MinJobVersion() && job.LookupTaskGroup(tg.Name) != nil {
notnoop (Contributor Author):

I made a few defensive checks here: if we see unexpected state (e.g. a job without the expected TaskGroup, or no non-promoted version), we fall back to using the latest version. This seems better than a panic, but I'm not sure whether we should simplify this.

Contributor:

is this unexpected? for jobs without update stanza, there won't be deployments, so downgradedJobForPlacement will return nil. (in that case, latest job is exactly what we want.)

notnoop (Contributor Author):

Yes, it's unexpected. missing.DowngradeNonCanary() should always be false.

  strategy := tg.Update
  canariesPromoted := dstate != nil && dstate.Promoted
- requireCanary := numDestructive != 0 && strategy != nil && len(canaries) < strategy.Canary && !canariesPromoted
+ requireCanary := (len(destructive) != 0 || (len(untainted) == 0 && len(migrate)+len(lost) != 0)) &&
+ 	strategy != nil && len(canaries) < strategy.Canary && !canariesPromoted
notnoop (Contributor Author):

This is a semi-related band-aid that we'll probably need to investigate further. The code here determines whether canaries are needed by checking if we have any destructive updates. However, if all allocations are dead (because their nodes were lost), len(destructive) will be 0. I changed the condition to account for that scenario.

Contributor:

it might be nice to break this conditional up a bit, and capture some of what's going on here.
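
One way to act on that suggestion, sketched here against simplified integer parameters instead of the reconciler's real allocation sets (this is an illustration, not the merged code):

```go
package scheduler

// requireCanary reports whether new canary allocations are still needed.
// Hedged sketch: the real reconciler works on allocation sets, not counts,
// and reads the update strategy off the task group.
func requireCanary(destructive, untainted, migrate, lost, canaries, canaryTarget int,
	hasUpdateStrategy, canariesPromoted bool) bool {

	hasDestructiveUpdates := destructive != 0

	// If every previous allocation is gone (e.g. its node was lost), there is
	// nothing left to destructively update, but the replacements for the
	// migrating/lost allocs still have to go through canaries.
	onlyReplacingLostAllocs := untainted == 0 && migrate+lost != 0

	canariesStillNeeded := hasUpdateStrategy && canaries < canaryTarget && !canariesPromoted

	return (hasDestructiveUpdates || onlyReplacingLostAllocs) && canariesStillNeeded
}
```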

@@ -533,9 +533,12 @@ func (a *allocReconciler) computeGroup(group string, all allocSet) bool {
})
a.result.place = append(a.result.place, allocPlaceResult{
name: alloc.Name,
-		canary: false,
+		canary: alloc.DeploymentStatus.IsCanary(),
notnoop (Contributor Author):

The code here assumed that all alloc migrations are non-canary. An odd assumption.

//
// Zero dstate.DesiredCanaries indicates that the TaskGroup allocations were updated in-place without using canaries.
if dstate := d.TaskGroups[tgName]; dstate != nil && (dstate.Promoted || dstate.DesiredCanaries == 0) {
job, err := s.state.JobByIDAndVersion(nil, ns, jobID, d.JobVersion)
Member:

Should we first compare d.JobVersion against s.job.Version and, if they're equal, return nil since they're equivalent?

notnoop (Contributor Author):

That's reasonable but also seems like a micro-optimization - I may consider it when addressing reviews.
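
For reference, a hedged sketch of what that early return could look like. The types are simplified stand-ins and the state lookup is injected as a function; the real downgradedJobForPlacement has a different signature and queries the state store directly:

```go
package scheduler

// Simplified stand-ins for the relevant Nomad structs.
type Job struct{ Version uint64 }

type DeploymentState struct {
	Promoted        bool
	DesiredCanaries int
}

type Deployment struct {
	JobVersion uint64
	TaskGroups map[string]*DeploymentState
}

// downgradedJobForPlacement returns the promoted job version to place
// non-canary allocs with, or nil if the latest job should be used.
func downgradedJobForPlacement(current *Job, d *Deployment, tgName string,
	lookup func(version uint64) (*Job, error)) (*Job, error) {

	dstate := d.TaskGroups[tgName]
	// A promoted deployment, or one with zero desired canaries, means its
	// job version is safe to place non-canary allocs on.
	if dstate == nil || !(dstate.Promoted || dstate.DesiredCanaries == 0) {
		return nil, nil
	}

	// The suggested early return: if the promoted deployment already points
	// at the job version being scheduled, there is nothing to downgrade to.
	if d.JobVersion == current.Version {
		return nil, nil
	}

	return lookup(d.JobVersion)
}
```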

@schmichael (Member) left a comment:

Great work. I'm tempted to ask for refactoring the inner loop of computePlacements to make local variables and SetJob(...) state easier to follow, but I'm not sure it'd help readability.

Does this fix #8439? If so can we make the steps in that issue (or similar) into an e2e test? We didn't have the e2e infrastructure around when deployments were written, so it would be nice to backfill.

if job != nil {
jobVersion = int(job.Version)
}
s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion, "found_version", jobVersion)
Member:

Our server logs are notoriously hard for operators to act on, so I'm wondering if there's something else we can do here:

Can we log differently if job is nil instead of using a sentinel value? I think that would improve the clarity of that case considerably.

Can we lower the log level to debug? I'm unsure what use this log line is outside of development. If an invariant has failed perhaps we need to be more aggressive in our wording?

If there's anything an operator can and should do to remediate this, let's explicitly call it out.

notnoop (Contributor Author):

I'll downgrade to debug.
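
For illustration, a hedged sketch of the nil-aware, debug-level logging being discussed; the message strings are placeholders, and the key/value fields mirror the existing Warn call above:

```go
// Sketch only: log differently when no previous job version exists versus
// when one exists but is older than expected.
if job == nil {
	s.logger.Debug("no downgraded job found for placement; falling back to the latest job",
		"expected_version", missing.MinJobVersion())
} else {
	s.logger.Debug("downgraded job version differs from expected; using the latest job",
		"expected_version", missing.MinJobVersion(), "found_version", job.Version)
}
```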

s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion, "found_version", jobVersion)
}
}

// Check if this task group has already failed
if metric, ok := s.failedTGAllocs[tg.Name]; ok {
metric.CoalescedFailures += 1
Member:

Do we need to restore the stack's original Job here?

notnoop (Contributor Author):

It needs to happen below, after a placement is made - particularly after s.selectNextOption is called. Will update the comment.

@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
// Compute top K scoring node metadata
s.ctx.Metrics().PopulateScoreMetaData()

// restore stack to use the latest job version again
Member:

Why are we doing this here? Might be worth including the reasoning in the comment as the actual code behavior is fairly obvious and trivial.

// To save space, it clears the Job field so it can be derived from the plan Job.
// If keepJob is true, the normalization is skipped to accommodate cases where a plan
// needs to support multiple versions of the same job.
func (p *Plan) AppendAlloc(alloc *Allocation, keepJob bool) {
Member:

Perhaps this should accept the job as an argument to simplify the one place it gets set to a non-nil value? It would simplify a couple of checks and be just as readable, if not a tiny bit more so.

Just an idea.
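
A hedged sketch of that alternative, using heavily simplified stand-in types rather than the real Plan and Allocation structs:

```go
package structs

// Minimal stand-ins; the real Plan and Allocation carry many more fields.
type Job struct{ Version uint64 }

type Allocation struct {
	NodeID string
	Job    *Job
}

type Plan struct {
	NodeAllocation map[string][]*Allocation
}

// AppendAlloc, reworked to take the job as an argument: a nil job keeps the
// normalization (the alloc's Job is later derived from the plan's job),
// while a non-nil job pins this alloc to a different job version.
func (p *Plan) AppendAlloc(alloc *Allocation, job *Job) {
	alloc.Job = job
	if p.NodeAllocation == nil {
		p.NodeAllocation = make(map[string][]*Allocation)
	}
	p.NodeAllocation[alloc.NodeID] = append(p.NodeAllocation[alloc.NodeID], alloc)
}
```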

@notnoop (Contributor Author) commented Aug 21, 2020

> Does this fix #8439? If so can we make the steps in that issue (or similar) into an e2e test? We didn't have the e2e infrastructure around when deployments were written, so it would be nice to backfill.

Yes! Will add an e2e test in a follow-up PR.

@cgbaker (Contributor) left a comment:

some comments, but i didn't see any problems with this fix.

i did not perform any manual testing.



if job != nil {
jobVersion = int(job.Version)
}
s.logger.Warn("failed to find appropriate job; using the latest", "expected_version", missing.MinJobVersion, "found_version", jobVersion)
Contributor:

not sure if this deserves a warning when job.Update.MaxParallel == 0 (i.e., no deployments)

notnoop (Contributor Author):

If MaxParallel == 0, job.Update is canonicalized to nil, so downgrading isn't relevant and this path isn't executed.

@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
// Compute top K scoring node metadata
s.ctx.Metrics().PopulateScoreMetaData()

// restore stack to use the latest job version again
Contributor:

this makes me a little uncomfortable. maybe it feels a little fragile, to swap out the job and then have to un-swap it later?

i'm not sure i have a constructive criticism here; maybe, if it's not too expensive, we should drop the conditional and always restore.
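
A sketch of the "always restore" variant under discussion, assuming the SetJob method mentioned earlier; this is an illustration rather than the merged change:

```go
// Sketch only: after computing placement metadata, unconditionally reset
// the stack to the latest job version so later placements in this loop
// never run against a downgraded job, whether or not one was swapped in.
s.ctx.Metrics().PopulateScoreMetaData()

s.stack.SetJob(s.job)
```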

@@ -489,6 +541,11 @@ func (s *GenericScheduler) computePlacements(destructive, place []placementResul
// Compute top K scoring node metadata
s.ctx.Metrics().PopulateScoreMetaData()

// restore stack to use the latest job version again
if downgradedJob != nil {
Contributor:

Suggested change:
-	if downgradedJob != nil {
+	if *s.stack.jobVersion != s.job.Version {

Contributor:

maybe this is better?

@cgbaker (Contributor) commented Aug 21, 2020

> Great work. I'm tempted to ask for refactoring the inner loop of computePlacements to make local variables and SetJob(...) state easier to follow, but I'm not sure it'd help readability.

i didn't see @schmichael's review before finishing mine; i had some of the same concerns, which i address in my review.

Mahmood Ali and others added 4 commits August 25, 2020 17:22
To address review comments
`(alloc.DeploymentStatus == nil || !alloc.DeploymentStatus.IsCanary())`
and `!alloc.DeploymentStatus.IsCanary()` are equivalent.
Co-authored-by: Chris Baker <[email protected]>
@notnoop force-pushed the b-reschedule-job-versions branch from 26dc560 to f075bcc on August 25, 2020 21:37
@notnoop merged commit 1afd415 into master on Aug 25, 2020
@notnoop deleted the b-reschedule-job-versions branch on August 25, 2020 22:02
Linked issue: Replacement allocations use unpromoted job versions (#8439)