
multiregion: all regions start in running if no max_parallel #8209

Merged
tgross merged 2 commits into master from b-multiregion-maxparallel-unset on Jun 19, 2020

Conversation

@tgross (Member) commented on Jun 19, 2020


If `max_parallel` is not set, all regions should begin in a `running` state
rather than a `pending` state. Otherwise only the first region is set to `running`,
and the remaining regions follow once it enters `blocked`. That behavior is
technically correct in that we have at most `max_parallel` regions running,
but it is definitely not what a user expects.
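
As a rough illustration of the behavior described above (a sketch with hypothetical names, not the code in this PR): with `max_parallel` unset, every region's deployment should start in `running`; with it set, only the first `max_parallel` regions should.

```go
// Sketch only: hypothetical helper illustrating the intended initial
// deployment status per region; not the actual Nomad implementation.
func initialRegionStatus(regionIndex, maxParallel int) string {
	// max_parallel unset (zero) means every region starts running.
	if maxParallel == 0 {
		return "running"
	}
	// Otherwise only the first max_parallel regions start running; the
	// rest stay pending until an earlier region enters blocked.
	if regionIndex < maxParallel {
		return "running"
	}
	return "pending"
}
```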
```diff
@@ -198,7 +198,6 @@ func (a *allocReconciler) Compute() *reconcileResults {
 	// Detect if the deployment is paused
 	if a.deployment != nil {
 		a.deploymentPaused = a.deployment.Status == structs.DeploymentStatusPaused
-			//|| a.deployment.Status == structs.DeploymentStatusPending
```
@tgross (Member, Author) commented on this line:

😊

@tgross tgross requested review from drewbailey, notnoop and cgbaker June 19, 2020 13:55
@notnoop (Contributor) left a comment:

Makes sense to me and should go into beta - but one question.

```go
// region starts in the running state
if a.job.IsMultiregion() &&
	a.job.Multiregion.Strategy != nil &&
	a.job.Multiregion.Strategy.MaxParallel != 0 &&
```
@notnoop (Contributor) commented on this snippet:

I missed some of the changes from earlier PRs - do we also want to check MaxParallel against the current region index (e.g. second region with MaxParallel=2)? Is that handled elsewhere?

@tgross (Member, Author) replied:

Oh that's a good catch. That's the same sort of thing -- we'd safely have at most max_parallel going, but the operator probably expects us to start with max_parallel. Will fix.

@tgross (Member, Author) replied:

The logic here is getting gnarly, though, so I'm going to pull it out into a function.

@tgross (Member, Author) replied:

I've pulled this logic out to a function on Job in f10cc93.
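
Judging from this thread and the follow-up commit message below, the extracted helper is `IsMultiregionStarter`, and it also accounts for the region-index question raised above. A minimal sketch of what such a method on `Job` might look like follows; this is an assumed shape built from the fields visible in this PR (`IsMultiregion`, `Multiregion.Strategy`, `MaxParallel`), and names like `Multiregion.Regions`, `Name`, and `j.Region` are assumptions, not necessarily the actual f10cc93 code.

```go
// Sketch only: assumed shape of the helper extracted in f10cc93,
// not the actual Nomad implementation.
func (j *Job) IsMultiregionStarter() bool {
	if !j.IsMultiregion() {
		return true
	}
	// max_parallel unset: every region starts in running.
	if j.Multiregion.Strategy == nil || j.Multiregion.Strategy.MaxParallel == 0 {
		return true
	}
	// Otherwise only the first max_parallel regions start in running;
	// compare this job's region against its position in the multiregion block.
	for i, region := range j.Multiregion.Regions {
		if region.Name == j.Region {
			return i < j.Multiregion.Strategy.MaxParallel
		}
	}
	return false
}
```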

@cgbaker (Contributor) left a comment:

💯

@tgross tgross merged commit 5a068e6 into master Jun 19, 2020
@tgross tgross deleted the b-multiregion-maxparallel-unset branch June 19, 2020 15:17
tgross added a commit that referenced this pull request Jun 19, 2020
In #8209 we fixed the `max_parallel` stanza for multiregion by introducing the
`IsMultiregionStarter` check, but didn't apply it to the earlier place it's
required. The result is that deployments start but don't place allocations.
@tgross tgross added this to the 0.12.0 milestone Jun 25, 2020
@github-actions (bot) commented on Jan 1, 2023:

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 1, 2023