jobs: job adoption can block on intents #62734
Labels: C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are the exception), no-issue-activity, X-stale
Comments
ajwerner added the C-enhancement label on Mar 29, 2021.
craig bot pushed a commit that referenced this issue on Jun 30, 2022:
79134: kv: support FOR {UPDATE,SHARE} SKIP LOCKED r=arulajmani a=nvanbenschoten

KV portion of #40476. Assists #62734. Assists #72407. Assists #78564.

**NOTE: the SQL changes here were extracted from this PR and moved to #83627. This allows us to land the KV portion of this change without exposing it yet.**

```sql
CREATE TABLE kv (k INT PRIMARY KEY, v INT)
INSERT INTO kv VALUES (1, 1), (2, 2), (3, 3)

-- in session 1
BEGIN;
UPDATE kv SET v = 0 WHERE k = 1 RETURNING *
  k | v
----+----
  1 | 0

-- in session 2
BEGIN;
SELECT * FROM kv ORDER BY k LIMIT 1 FOR UPDATE SKIP LOCKED
  k | v
----+----
  2 | 2

-- in session 3
BEGIN;
SELECT * FROM kv FOR UPDATE SKIP LOCKED
  k | v
----+----
  3 | 3
```

These semantics closely match those of FOR {UPDATE,SHARE} SKIP LOCKED in PostgreSQL. With SKIP LOCKED, any selected rows that cannot be immediately locked are skipped. Skipping locked rows provides an inconsistent view of the data, so this is not suitable for general-purpose work, but it can be used to avoid lock contention with multiple consumers accessing a queue-like table.

[Here](https://www.pgcasts.com/episodes/the-skip-locked-feature-in-postgres-9-5) is a short video that explains why users might want to use SKIP LOCKED in Postgres. The same motivation applies to CockroachDB. However, SKIP LOCKED is not a complete solution for queues, as MVCC garbage will still become a major problem with sufficiently high consumer throughput. Even with a very low gc.ttl, CockroachDB does not garbage collect MVCC garbage fast enough to avoid slowing down consumers that scan from the head of a queue over the MVCC tombstones of previously consumed queue entries.

----

### Implementation

Skip locked has a number of touchpoints in Storage and KV. To understand these, we first need to understand the isolation model of skip-locked. When a request uses a SkipLocked wait policy, it behaves as if run at a weaker isolation level for any keys that it skips over. If the read request does not return a key, it makes no claim about whether that key does or does not exist or what the key's value was at the read's MVCC timestamp. Instead, it only makes a claim about the set of keys that are returned. For those keys which were not skipped and were returned (and often locked, if combined with a locking strength, though this is not required), serializable isolation is enforced.

When the `pebbleMVCCScanner` is configured with the skipLocked option, it does not include locked keys in the result set. To support this, the MVCC layer needs to be provided access to the in-memory lock table, so that it can determine whether keys are locked with unreplicated locks. Replicated locks are represented as intents, which will be skipped over in getAndAdvance.

Requests using the SkipLocked wait policy acquire the same latches as before and wait on all latches ahead of them in line. However, if a request is using a SkipLocked wait policy, we always perform optimistic evaluation. In Replica.collectSpansRead, SkipLocked reads are able to constrain their read spans down to point reads on just those keys that were returned and were not already locked. This means there is a good chance that some or all of the write latches that the SkipLocked read would have blocked on won't overlap with the keys that the request ends up returning, so they won't conflict when checking for optimistic conflicts.

Skip locked requests do not scan the lock table when initially sequencing. Instead, they capture a snapshot of the in-memory lock table while sequencing and scan the lock table as they perform their MVCC scan, using the btree snapshot stored in the concurrency guard. MVCC was taught about skip locked in the previous commit.

Skip locked requests add point reads for each of the keys returned to the timestamp cache, instead of adding a single ranged read. This satisfies the weaker isolation level of skip locked. Because the issuing transaction does not intend to enforce serializable isolation across keys that were skipped by its request, it does not need to prevent writes below its read timestamp to keys that were skipped.

Similarly, skip locked requests only record refresh spans for the individual keys returned, instead of recording a refresh span across the entire read span. Because the issuing transaction does not intend to enforce serializable isolation across keys that were skipped by its request, it does not need to validate that they have not changed if the transaction ever needs to refresh.

----

### Benchmarking

I haven't done any serious benchmarking with SKIP LOCKED yet, though I'd like to. At some point, I would like to build a simple queue-like workload into the `workload` tool and experiment with various consumer access patterns (non-locking reads, locking reads, skip-locked reads), indexing schemes, concurrency levels (for producers and consumers), and batch sizes.

82915: sql: add locality to system.sql_instances table r=rharding6373 a=rharding6373

This PR adds the column `locality` to the `system.sql_instances` table that contains the locality (e.g., region) of a SQL instance. The encoded locality is a string representing the `roachpb.Locality` that may have been provided when the instance was created. This change also pipes the locality through `InstanceInfo`. This will allow us to determine and use locality information of other SQL instances, e.g. in DistSQL for multi-tenant locality-aware distribution planning.

Informs: #80678

Release note (sql change): Table `system.sql_instances` has a new column, `locality`, that stores the locality of a SQL instance if it was provided when the instance was started. This exposes a SQL instance's locality to other instances in the cluster for query planning.

83418: loopvarcapture: do not flag `defer` within local closure r=srosenberg,dhartunian a=renatolabs

Previously, handling of `defer` statements in the `loopvarcapture` linter was naive: whenever a `defer` statement in the body of a loop referenced a loop variable, the linter would flag it as an invalid reference. However, that can be overly restrictive, as a relatively common idiom is to create literal functions and immediately call them so as to take advantage of `defer` semantics, as in the example below:

```go
for _, n := range numbers {
	// ...
	func() {
		// ...
		defer func() { doSomething(n) }() // always safe
		// ...
	}()
}
```

The above reference is valid because it is guaranteed to be called with the correct value for the loop variable. A similar scenario occurs when a closure is assigned to a local variable for use within the loop:

```go
for _, n := range numbers {
	// ...
	helper := func() {
		// ...
		defer func() { doSomething(n) }()
		// ...
	}
	// ...
	helper() // always safe
}
```

In the snippet above, calling the `helper` function is also always safe because the `defer` statement is scoped to the closure containing it. However, it is still *not* safe to call the helper function within a goroutine.

This commit updates the `loopvarcapture` linter to recognize when a `defer` statement is safe because it is contained in a local closure. The two cases illustrated above will no longer be flagged, allowing that idiom to be used freely.

Release note: None

83545: sql/schemachanger: move end to end testing to one test per-file r=fqazi a=fqazi

Previously, we allowed multiple tests per file for end-to-end testing inside the declarative schema changer. This was inadequate because we plan on extending the end-to-end testing to start injecting additional read/write operations at different stages, which would be difficult with multiple tests in a single file. To address this, this patch splits the tests into individual files, with one test per file. Additionally, it extends support to allow multiple statements per test statement, for transaction support testing (this is currently unused).

Release note: None

Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: rharding6373 <[email protected]>
Co-authored-by: Renato Costa <[email protected]>
Co-authored-by: Faizan Qazi <[email protected]>
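To make the skip-locked scan behavior described in #79134 above concrete, here is a minimal, self-contained Go sketch of the observable semantics: keys locked by another transaction are skipped rather than waited on, and the keys that are returned get locked. Every type and function below is invented for illustration and does not correspond to CockroachDB internals.

```go
package main

import "fmt"

type txnID string

type row struct {
	key      string
	value    string
	lockedBy txnID // empty if unlocked
}

// scanSkipLocked returns rows in key order, skipping any row that a
// different transaction holds a lock on, and locking the rows it returns
// (as FOR UPDATE would). A negative limit means "no limit".
func scanSkipLocked(rows []row, txn txnID, limit int) []row {
	var out []row
	for i := range rows {
		if limit >= 0 && len(out) == limit {
			break
		}
		r := &rows[i]
		if r.lockedBy != "" && r.lockedBy != txn {
			continue // locked by another transaction: skip instead of blocking
		}
		r.lockedBy = txn // acquire the lock
		out = append(out, *r)
	}
	return out
}

func main() {
	table := []row{
		{key: "1", value: "0", lockedBy: "txn-1"}, // session 1 updated and locked k=1
		{key: "2", value: "2"},
		{key: "3", value: "3"},
	}
	// Session 2: SELECT * FROM kv ORDER BY k LIMIT 1 FOR UPDATE SKIP LOCKED
	fmt.Println(scanSkipLocked(table, "txn-2", 1)) // [{2 2 txn-2}]
	// Session 3: SELECT * FROM kv FOR UPDATE SKIP LOCKED
	fmt.Println(scanSkipLocked(table, "txn-3", -1)) // [{3 3 txn-3}]
}
```

Running this mirrors the three-session SQL example in the PR description: session 2 claims row 2 and session 3 claims row 3 while row 1 remains locked by session 1.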
We have marked this issue as stale because it has been inactive for an extended period.
Is your feature request related to a problem? Please describe.
Currently, job adoption is handled by a reconciliation loop whereby the set of runnable jobs is claimed and then the claimed jobs are run. These loops end up scanning various indexes of the jobs table and can encounter intents. This is problematic because a long-running transaction that creates a job can hold off adoption of new jobs.
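For context, a rough sketch of such a reconciliation-style claim loop is below. The statement shape and column names (`claim_session_id`, `claim_instance_id`) are illustrative approximations, not the exact `system.jobs` schema or the real registry code.

```go
package jobsketch

import (
	"context"
	"database/sql"
	"time"
)

// claimJobs periodically claims unclaimed jobs by scanning the jobs table.
// If another transaction has created a job but not yet committed, the
// UPDATE's scan blocks on that uncommitted intent, which delays adoption of
// every other new job; this is the problem described above.
func claimJobs(ctx context.Context, db *sql.DB, sessionID []byte, instanceID int32) error {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			if _, err := db.ExecContext(ctx,
				`UPDATE system.jobs
				    SET claim_session_id = $1, claim_instance_id = $2
				  WHERE claim_session_id IS NULL
				  LIMIT $3`,
				sessionID, instanceID, 10,
			); err != nil {
				return err
			}
		}
	}
}
```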
Describe the solution you'd like
For jobs intended to run on the gateway, which covers most schema change jobs and most jobs that run semi-synchronously, we could use a rangefeed as the source of information about claimed jobs. Such a mechanism would be easy to integrate; it could simply launch a goroutine to call `(*jobs.Registry).resumeJob()`.
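A rough sketch of this rangefeed-driven approach follows, assuming a hypothetical channel of decoded events fed by a rangefeed over the jobs table. Only `(*jobs.Registry).resumeJob()` is named in the issue; the `registry` interface and event type below are invented for illustration.

```go
package jobsketch

import "context"

// jobClaimEvent is a hypothetical decoded rangefeed event for a jobs-table
// row that this node has just claimed.
type jobClaimEvent struct {
	JobID int64
}

// registry stands in for the single call mentioned in the issue:
// (*jobs.Registry).resumeJob().
type registry interface {
	resumeJob(ctx context.Context, jobID int64) error
}

// adoptFromRangefeed consumes claim events as they arrive, instead of
// periodically scanning the jobs table, and resumes each claimed job in its
// own goroutine.
func adoptFromRangefeed(ctx context.Context, events <-chan jobClaimEvent, r registry) {
	for {
		select {
		case <-ctx.Done():
			return
		case ev := <-events:
			go func(id int64) {
				// Real code would log and retry on error.
				_ = r.resumeJob(ctx, id)
			}(ev.JobID)
		}
	}
}
```

Because adoption is driven by events rather than by scans, it never has to read past other transactions' intents on the jobs table.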
Describe alternatives you've considered
If cockroach supported `SKIP LOCKED` or some mechanism to scan below intents, that might work. However, such a thing seems hard to implement in the context of serializable isolation. See #40476 for more discussion; a sketch of what a SKIP LOCKED-based claim might look like follows below.

Jira issue: CRDB-2639
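For illustration, this is roughly what claiming a job with the `SKIP LOCKED` alternative could look like, written against the same hypothetical jobs schema as the sketch above. Note that CockroachDB did not support `SKIP LOCKED` when this issue was filed (see #40476 and the #79134 work quoted above).

```go
package jobsketch

import (
	"context"
	"database/sql"
)

// claimOneJobSkipLocked claims a job with a locking read that skips rows
// locked (or written) by other transactions instead of blocking on them.
// Schema and statements are illustrative only.
func claimOneJobSkipLocked(ctx context.Context, db *sql.DB, sessionID []byte, instanceID int32) (int64, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return 0, err
	}
	defer func() { _ = tx.Rollback() }() // no-op after a successful Commit

	// Lock the first unclaimed job we can get without waiting.
	var id int64
	if err := tx.QueryRowContext(ctx,
		`SELECT id FROM system.jobs
		  WHERE claim_session_id IS NULL
		  LIMIT 1
		  FOR UPDATE SKIP LOCKED`,
	).Scan(&id); err != nil {
		return 0, err // includes sql.ErrNoRows when every candidate was skipped
	}
	if _, err := tx.ExecContext(ctx,
		`UPDATE system.jobs SET claim_session_id = $1, claim_instance_id = $2 WHERE id = $3`,
		sessionID, instanceID, id,
	); err != nil {
		return 0, err
	}
	return id, tx.Commit()
}
```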