kvserver: prevent build-up of abandoned consistency checks #76855
Closed
Conversation
Please don't review yet (not sure if requesting reviews on a draft sends out notifications; I assume it does)

It does. Will hold off.
Purely mechanical. Release note: None
Adding a comment while I'm there. Release note: None
Release note: None
Release note: None
We've seen in the events leading up to cockroachdb#75448 that a build-up of consistency check computations on a node can severely impact node performance. This commit attempts to address the main source of that, while reworking the code for easier maintainability.

The consistency checker works by replicating a command through Raft that, on each Replica, triggers an async checksum computation, the results of which the caller collects via `CollectChecksum` requests addressed to each `Replica`. If for any reason the caller does *not* wait to collect the checksums but instead moves on to run another consistency check (perhaps on another Range), these inflight computations can build up over time. This was the main issue in cockroachdb#75448: we were accidentally canceling the context on the leaseholder "right away", failing the consistency check (but leaving it running on all other replicas), and moving on to the next Range. As a result, some (and, with spread-out leaseholders, ultimately all) Replicas ended up with dozens of consistency check computations, starving I/O and CPU. We "addressed" this by avoiding that errant ctx cancellation (cockroachdb#75448, with cockroachdb#75656 as the longer-term work), but this isn't a holistic fix yet.

In this commit, we make three main changes (sketched in code below):

- Give the inflight consistency check computations a clean API, which makes it much easier to understand how it all works.
- When returning from `CollectChecksum` (on success or error, notably including context cancellation), cancel the corresponding consistency check. This solves the problem, *assuming* that `CollectChecksum` is reliably issued to each Replica.
- Reliably issue `CollectChecksum` to each Replica on which a computation may have been triggered. When the caller's context is canceled, still make the call, on a one-off Context with a one-second timeout, which should be good enough to reach the Replica and short-circuit the call.

Release note: None
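To make the lifecycle concrete, here is a minimal, self-contained Go sketch of the pattern the last two bullets describe. All names here (`replica`, `checksumTask`, `startChecksum`, `collectChecksum`, `collect`) are illustrative placeholders, not CockroachDB's actual API, and the expensive checksum scan is simulated with a timer.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

// checksumTask tracks one in-flight consistency check computation.
type checksumTask struct {
	cancel context.CancelFunc
	result chan string // hypothetical checksum payload
}

// replica stands in for a kvserver Replica; names are illustrative only.
type replica struct {
	mu    sync.Mutex
	tasks map[string]*checksumTask // keyed by checksum ID
}

// startChecksum is what the Raft-replicated command would trigger on each
// replica: an async computation tied to its own cancelable context.
func (r *replica) startChecksum(id string) {
	ctx, cancel := context.WithCancel(context.Background())
	t := &checksumTask{cancel: cancel, result: make(chan string, 1)}
	r.mu.Lock()
	r.tasks[id] = t
	r.mu.Unlock()
	go func() {
		select {
		case <-time.After(50 * time.Millisecond): // stand-in for the expensive scan
			t.result <- "deadbeef"
		case <-ctx.Done():
			// Abandoned: stop consuming I/O and CPU immediately.
		}
	}()
}

// collectChecksum mirrors the second change above: whether it returns a
// result, an error, or a canceled caller context, it always cancels the
// computation, so nothing is left running in the background.
func (r *replica) collectChecksum(ctx context.Context, id string) (string, error) {
	r.mu.Lock()
	t := r.tasks[id]
	r.mu.Unlock()
	if t == nil {
		return "", errors.New("no inflight checksum computation: " + id)
	}
	defer t.cancel() // the key line: every return path cancels the task
	select {
	case sum := <-t.result:
		return sum, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// collect mirrors the third change: even if the caller's context is already
// canceled, still issue the call, on a one-off one-second-timeout context,
// so the replica gets a chance to short-circuit the computation.
func collect(ctx context.Context, r *replica, id string) (string, error) {
	if ctx.Err() != nil {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(context.Background(), time.Second)
		defer cancel()
	}
	return r.collectChecksum(ctx, id)
}

func main() {
	r := &replica{tasks: map[string]*checksumTask{}}
	r.startChecksum("check-1")

	// Simulate a caller that gave up before collecting.
	canceled, cancel := context.WithCancel(context.Background())
	cancel()
	sum, err := collect(canceled, r, "check-1")
	fmt.Println(sum, err) // the computation is canceled regardless of outcome
}
```

The load-bearing detail is the `defer t.cancel()`: because every exit path from the collection call cancels the computation, any caller that reaches the replica at all, even on the one-second fallback context, cannot leave work running behind it.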
tbg force-pushed the conscheck-rewrite branch from 65f8690 to 322dd14 on February 22, 2022 at 10:52.
Properly fixed in #86883