kvserver: prevent build-up of abandoned consistency checks #77432
Labels
A-kv-replication: Relating to Raft, consensus, and coordination.
C-bug: Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.
erikgrinaker added the C-bug, A-kv-replication, and T-kv-replication labels on Mar 7, 2022
erikgrinaker changed the title from "kvserver: prevent build-up of consistency checks" to "kvserver: prevent build-up of abandoned consistency checks" on Mar 7, 2022
craig bot pushed a commit that referenced this issue on Mar 10, 2022:
77433: kvserver: limit concurrent consistency checks r=tbg a=erikgrinaker

Consistency checks run asynchronously below Raft on all replicas using a background context, but share a common per-store rate limit. It was possible for many concurrent checks to build up over time, which would reduce the rate available to each check, exacerbating the problem and preventing any of them from completing in a reasonable time.

This patch adds two new limits for asynchronous consistency checks:

* `consistencyCheckAsyncConcurrency`: limits the number of concurrent asynchronous consistency checks to 7 per store. Additional consistency checks will error.
* `consistencyCheckAsyncTimeout`: sets an upper timeout of 1 hour for each consistency check, to prevent it from running forever.

`CHECK_STATS` checks are exempt from the limit, as these are cheap and the DistSender will run them in parallel across ranges, e.g. via `crdb_internal.check_consistency()`.

There are likely better solutions to this problem, but this is a simple, backportable stopgap.

Touches #77432. Resolves #77080.

Release justification: fixes for high-priority or high-severity bugs in existing functionality.

Release note (bug fix): Added a limit of 7 concurrent asynchronous consistency checks per store, with an upper timeout of 1 hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto Pebble snapshots.

Co-authored-by: Erik Grinaker <[email protected]>
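A minimal sketch of the limiting pattern this PR describes, assuming a buffered-channel semaphore and `context.WithTimeout`. The constants `consistencyCheckAsyncConcurrency` and `consistencyCheckAsyncTimeout` are named in the PR; everything else (the wrapper, its signature, the demo) is illustrative and not the actual CockroachDB implementation.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

const (
	consistencyCheckAsyncConcurrency = 7             // per-store concurrency limit (from the PR)
	consistencyCheckAsyncTimeout     = 1 * time.Hour // upper bound per check (from the PR)
)

// asyncChecks is a semaphore limiting concurrent asynchronous checks.
var asyncChecks = make(chan struct{}, consistencyCheckAsyncConcurrency)

// runAsyncConsistencyCheck is an illustrative wrapper: it errors out if the
// semaphore is already full, and bounds the check's runtime with a timeout.
func runAsyncConsistencyCheck(check func(context.Context) error) error {
	select {
	case asyncChecks <- struct{}{}: // acquire a slot
	default:
		return errors.New("too many concurrent consistency checks")
	}
	go func() {
		defer func() { <-asyncChecks }() // release the slot when done
		ctx, cancel := context.WithTimeout(context.Background(), consistencyCheckAsyncTimeout)
		defer cancel()
		if err := check(ctx); err != nil {
			fmt.Println("consistency check failed:", err)
		}
	}()
	return nil
}

func main() {
	_ = runAsyncConsistencyCheck(func(ctx context.Context) error {
		// Placeholder for the actual checksum computation.
		select {
		case <-time.After(10 * time.Millisecond):
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	time.Sleep(50 * time.Millisecond) // let the async check finish in this demo
}
```

Rejecting new checks outright when the semaphore is full, rather than queueing them, matches the PR's "additional consistency checks will error" behaviour.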
This was referenced Mar 10, 2022
This was referenced Jun 24, 2022
This was referenced Aug 22, 2022
craig bot pushed a commit that referenced this issue on Sep 9, 2022:
86883: kvserver: cancel consistency checks more reliably r=tbg a=pavelkalinnikov

This PR increases the chance of propagating the cancelation signal to replicas, to prevent them from running abandoned consistency check tasks. Specifically:

- The computation is aborted if the collection request is canceled.
- The computation is not started if the collection request gave up recently.
- The initiator runs all requests in parallel, to reduce asynchrony and to be able to cancel all the requests explicitly, instead of skipping some of them.

---

### Background

Consistency checks are initiated by the `ComputeChecksum` command in the Raft log, and run until completion under a background context. The result is collected by the initiator via the `CollectChecksum` long poll. The task is synchronized with the collection handler via the map of `replicaChecksum` structs.

Currently, the replica initiating the consistency check sends a collection request to itself first, and only then to the other replicas in parallel. This results in substantial asynchrony on the receiving replica, between the request handler and the computation task. The current solution to that is keeping the checksum computation results in memory for `replicaChecksumGCInterval`, to return them to late-arriving requests. However, there is **no symmetry** here: if the computation starts late instead, it doesn't learn about a previously failed request.

The reason why the initiator blocks on its local checksum first is that it computes the "master checksum", which is then added to all other requests. However, this field is only used by the receiving end to log an inconsistency error. The actual killing of this replica happens in the second phase of the protocol, after the initiating replica commits another Raft message with the `Terminate` field populated. So, there is **no strong reason to keep this blocking behaviour**.

When the `CollectChecksum` handler exits due to a canceled context (for example, the request timed out, or the remote caller crashed), the background task continues to run. If it was not running, it may start in the future. In both cases, the consistency checks pool (which has a limited size and processing rate) spends resources on running dangling checks, and rejects useful ones.

If the initiating replica fails to compute its local checksum, it does not send requests (or any indication to cancel) to other replicas. This is problematic because the checksum tasks will still be run on all replicas, which opens the possibility of accumulating many such dangling checks.

---

Part of #77432

Release justification: performance and stability improvement

Release note (bug fix): A consistency check is now skipped/stopped when its remote initiator gives up on it. Previously, such checks would still attempt to run and, due to the limited size of the worker pool, prevent useful checks from running. In addition, consistency check requests are now sent in parallel, and the cancelation signal propagates more reliably.

Co-authored-by: Pavel Kalinnikov <[email protected]>
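A rough sketch of the parallel fan-out with cancelation propagation that this PR describes, assuming all outgoing requests share one cancelable context derived from the initiator's context. `collectChecksums` and the `collect` callback are hypothetical stand-ins for the initiator and the `CollectChecksum` long poll, not CockroachDB code.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// collectChecksums fans out CollectChecksum-style requests to all replicas in
// parallel, sharing one cancelable context so that when the caller gives up
// (cancels ctx or its timeout fires), every in-flight request observes the
// cancelation instead of lingering as an abandoned task.
func collectChecksums(ctx context.Context, replicas []string,
	collect func(ctx context.Context, replica string) (string, error)) map[string]string {

	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var mu sync.Mutex
	results := make(map[string]string)

	var wg sync.WaitGroup
	for _, r := range replicas {
		r := r
		wg.Add(1)
		go func() {
			defer wg.Done()
			sum, err := collect(ctx, r)
			if err != nil {
				return // e.g. canceled or timed out; the replica-side task should stop too
			}
			mu.Lock()
			results[r] = sum
			mu.Unlock()
		}()
	}
	wg.Wait()
	return results
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	// Fake collector standing in for the CollectChecksum long poll.
	collect := func(ctx context.Context, replica string) (string, error) {
		select {
		case <-time.After(10 * time.Millisecond):
			return "checksum-of-" + replica, nil
		case <-ctx.Done():
			return "", ctx.Err()
		}
	}
	fmt.Println(collectChecksums(ctx, []string{"r1", "r2", "r3"}, collect))
}
```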
craig bot pushed a commit that referenced this issue on Sep 21, 2022:
88268: kvserver: make consistency check symmetrical r=erikgrinaker a=pavelkalinnikov

Previously, a consistency check would fail if the local replica failed to compute the checksum. This made the consistency check not resilient to issues with the leader. The only place depending on this assertion was the MVCC stats delta sub-check of the consistency check. It is not strictly required to take the delta from the local replica for this purpose; any successful response from another replica can be used too.

This commit makes it so that any successful checksum computation can be examined. As a result, all replicas are treated symmetrically, so the consistency check is more resilient to leader issues.

Touches #77432

Release note: None

Co-authored-by: Pavel Kalinnikov <[email protected]>
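A small illustrative sketch of the symmetric behaviour, using a hypothetical `pickStatsBaseline` helper and placeholder types: any successful per-replica response can serve as the baseline for the stats delta sub-check, so a failure on the local (leader) replica no longer dooms the whole check.

```go
package main

import "fmt"

// checksumResponse is a stand-in for a per-replica consistency check result.
type checksumResponse struct {
	replica string
	ok      bool
	stats   int64 // placeholder for the replica's MVCC stats
}

// pickStatsBaseline returns any successful response to use as the baseline
// for the stats delta sub-check, treating all replicas symmetrically instead
// of requiring the local replica's response.
func pickStatsBaseline(responses []checksumResponse) (checksumResponse, bool) {
	for _, r := range responses {
		if r.ok {
			return r, true
		}
	}
	return checksumResponse{}, false // no replica computed a checksum
}

func main() {
	responses := []checksumResponse{
		{replica: "local", ok: false},                // leader failed to compute its checksum
		{replica: "follower-1", ok: true, stats: 42}, // still usable as the baseline
	}
	if base, ok := pickStatsBaseline(responses); ok {
		fmt.Printf("using %s stats (=%d) for the delta check\n", base.replica, base.stats)
	}
}
```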
craig bot pushed a commit that referenced this issue on Sep 28, 2022:
88902: kvserver: remove checksum computation tasks GC r=erikgrinaker a=pavelkalinnikov

GC is no longer needed because the task is synchronized with the collection request, and both wait only up to 5s before giving up. Both are async, and normally the task runs for O(tens of seconds) anyway, so it is acceptable for the failing path to take 5s.

This change makes the `replicaChecksum` lifecycle code significantly simpler.

Touches #77432

Release note: None

Co-authored-by: Pavel Kalinnikov <[email protected]>
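A toy sketch of the rendezvous that makes GC unnecessary, assuming both sides give up after 5 seconds as described above. The `replicaChecksum` name comes from the PR, but its fields and the `collect` helper here are illustrative stand-ins, not the actual structure.

```go
package main

import (
	"fmt"
	"time"
)

// replicaChecksum is an illustrative stand-in for the per-check state: the
// computation task and the collection request rendezvous on these channels,
// so no stale entries are left behind that would need periodic GC.
type replicaChecksum struct {
	started chan struct{} // closed when the computation task begins
	result  chan string   // the computed checksum, sent once
}

const rendezvousTimeout = 5 * time.Second // the 5s give-up bound described in the PR

// collect waits for the task to start and then for its result, giving up
// after the timeout in either phase.
func collect(c *replicaChecksum) (string, error) {
	select {
	case <-c.started:
	case <-time.After(rendezvousTimeout):
		return "", fmt.Errorf("computation never started")
	}
	select {
	case sum := <-c.result:
		return sum, nil
	case <-time.After(rendezvousTimeout):
		return "", fmt.Errorf("computation did not finish in time")
	}
}

func main() {
	c := &replicaChecksum{started: make(chan struct{}), result: make(chan string, 1)}
	go func() { // the computation task
		close(c.started)
		time.Sleep(10 * time.Millisecond) // pretend to compute a checksum
		c.result <- "deadbeef"
	}()
	fmt.Println(collect(c))
}
```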
Consistency checks currently run asynchronously with a background context on all range replicas. If the caller gives up on these (e.g. because one of them errors), we don't cancel the others. As the caller goes on to run further consistency checks, this can cause a build-up of abandoned consistency checks. We should fix this such that these checks are cancelled when no longer needed.
See draft PR in #76855.
Jira issue: CRDB-13588