
scheduler: seed random shuffle of nodes with eval ID #12008

Merged (1 commit, Feb 8, 2022)

Conversation

@tgross (Member) commented Feb 4, 2022

Processing an evaluation is nearly a pure function over the state
snapshot, but we randomly shuffle the nodes. This means that
developers can't take a given state snapshot and pass an evaluation
through it and be guaranteed the same plan results.

But the evaluation ID is already random, so if we use it as the seed
for shuffling the nodes we can greatly reduce the sources of
non-determinism. Unfortunately, Go's map iteration uses a global
source of randomness and not a goroutine-local one, but arguably, if
scheduler behavior is impacted by map iteration order, that's a bug
in the code doing the iterating.
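
To make the idea concrete, here is a minimal sketch of seeding a shuffle from an eval ID. It uses plain strings in place of `*structs.Node` and hypothetical helper names; it illustrates the approach, not this PR's actual diff.

```go
package scheduler

import (
	"encoding/binary"
	"math/rand"
)

// seedFromEvalID derives a deterministic seed from an evaluation ID.
// Eval IDs are UUIDs, so their trailing bytes are effectively random.
func seedFromEvalID(evalID string) int64 {
	buf := []byte(evalID)
	// Use the last 8 bytes of the ID string as the seed.
	return int64(binary.BigEndian.Uint64(buf[len(buf)-8:]))
}

// shuffleNodes shuffles the slice in place with a source seeded from
// the eval ID, so the same eval always yields the same node order.
func shuffleNodes(evalID string, nodes []string) {
	r := rand.New(rand.NewSource(seedFromEvalID(evalID)))
	r.Shuffle(len(nodes), func(i, j int) {
		nodes[i], nodes[j] = nodes[j], nodes[i]
	})
}
```

Because the seed is a pure function of the eval ID, replaying the same evaluation against the same state snapshot yields the same candidate ordering.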


Reviewers: I've grabbed a random set of folks to look at this, but consider it mostly a "working proposal". I'm not certain there aren't side effects I haven't considered. Something like this would have been useful while recently debugging a large customer's incident.

@tgross force-pushed the scheduler-deterministic-shuffle branch from 8304f36 to 602150c (Feb 4, 2022 14:55)
@tgross changed the title from "scheduler: seed random shuffle nodes with eval ID" to "scheduler: seed random shuffle of nodes with eval ID" (Feb 4, 2022)
@schmichael (Member) commented:

Oops, forgot to mention a test that asserts shuffleNodes is stable would be nice.
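
A stability test along these lines might look like the following sketch, written against the simplified `shuffleNodes` above rather than the scheduler's real signature:

```go
package scheduler

import (
	"reflect"
	"testing"
)

// TestShuffleNodes_Stable asserts that shuffling with the same eval ID
// always yields the same order.
func TestShuffleNodes_Stable(t *testing.T) {
	evalID := "8f7a4d2e-1c3b-4a5d-9e6f-0b1a2c3d4e5f" // any fixed UUID

	want := []string{"node-a", "node-b", "node-c", "node-d"}
	shuffleNodes(evalID, want)

	for i := 0; i < 10; i++ {
		got := []string{"node-a", "node-b", "node-c", "node-d"}
		shuffleNodes(evalID, got)
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("shuffle not stable: got %v, want %v", got, want)
		}
	}
}
```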

@motobrowning left a comment:

Ok

@tgross (Member, Author) commented Feb 8, 2022

> Oops, forgot to mention a test that asserts shuffleNodes is stable would be nice.

Good call.

@tgross force-pushed the scheduler-deterministic-shuffle branch from 20b3270 to 27102af (Feb 8, 2022 15:14)
```diff
@@ -70,7 +70,8 @@ type GenericStack struct {

 func (s *GenericStack) SetNodes(baseNodes []*structs.Node) {
 	// Shuffle base nodes
-	shuffleNodes(baseNodes)
+	idx, _ := s.ctx.State().LatestIndex()
```
@schmichael (Member) commented Feb 8, 2022 on the line above:
I can't believe how many words I'm writing about needing a nonce 😅, but I think there are some subtle pros and cons to this approach:

tl;dr: This allows writing reproducible tests against deterministic state stores, which is the main improvement we're looking for! :shipit:

Pros:

  1. Subsequent scheduling attempts of the same eval will be shuffled differently! This accomplishes that goal perfectly. 🎉
  2. Test cases with deterministic state store updates will shuffle nodes deterministically.

Cons:

  1. When trying to reproduce scheduling behavior from a user's snapshot, the LatestIndex() will be the snapshot's index. The snapshot was likely taken long (in Raft index terms (pun not intended)) after when the scheduling attempt we wish to reproduce was made. Obviously in cases like this the entire state snapshot is likely divergent from the case we wish to inspect, so we can only reduce non-determinism, not remove it. I'm unsure how much value there is in reducing non-determinism in this case.
  2. Workers only ever enforce the snapshots are >= a Raft index (Eval.WaitIndex initially, the RefreshIndex on subsequent attempts). Whether the index is == (reproducible shuffle!) or > (oh no) is entirely dependent on races between the Raft/fsm and scheduling subsystems. Even if the indexes are only off-by-1 the nodes will be shuffled completely differently.
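
The variant under discussion folds the state index into the seed. Roughly, under the same simplified types as the earlier sketch (hypothetical names, not the merged code):

```go
package scheduler

import (
	"encoding/binary"
	"math/rand"
)

// shuffleNodesWithIndex mixes a Raft/state index (or a retry counter)
// into the eval-ID seed so that subsequent scheduling attempts of the
// same eval are shuffled differently.
func shuffleNodesWithIndex(evalID string, index uint64, nodes []string) {
	buf := []byte(evalID)
	seed := binary.BigEndian.Uint64(buf[len(buf)-8:]) ^ index
	r := rand.New(rand.NewSource(int64(seed)))
	r.Shuffle(len(nodes), func(i, j int) {
		nodes[i], nodes[j] = nodes[j], nodes[i]
	})
}
```

Con 2 follows directly from this construction: because the index feeds the seed, snapshots whose indexes differ even by one produce unrelated shuffles.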

@tgross (Member, Author) replied:

Given that the primary use case for this is to make it so that we can have deterministic tests, maybe the right way to deal with this is to just have the index used for the shuffle get logged somewhere. That way we can feed that index to the shuffle function as part of tests?

@schmichael (Member) replied:

Yeah that or the retry # option.

Although I don't think there's a rush, as this should work for tests where we either bypass Raft and manually insert objects into the state store, or load a snapshot into an otherwise inactive server and perform some scheduling attempts.
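
The snapshot-replay tests described here reduce to replaying the same (eval ID, index) pair. A usage sketch, in the same hypothetical test file as above:

```go
func TestShuffleNodes_Replay(t *testing.T) {
	evalID := "8f7a4d2e-1c3b-4a5d-9e6f-0b1a2c3d4e5f"

	a := []string{"node-a", "node-b", "node-c", "node-d"}
	b := []string{"node-a", "node-b", "node-c", "node-d"}
	shuffleNodesWithIndex(evalID, 100, a)
	shuffleNodesWithIndex(evalID, 100, b) // same (eval ID, index) pair
	if !reflect.DeepEqual(a, b) {
		t.Fatal("replaying the same (eval ID, index) pair should reproduce the shuffle")
	}

	c := []string{"node-a", "node-b", "node-c", "node-d"}
	shuffleNodesWithIndex(evalID, 101, c) // off-by-one index
	// With only four nodes the two orders can coincide by chance, so
	// log rather than assert inequality.
	t.Logf("index 100: %v  index 101: %v", a, c)
}
```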
