
txn: record resolving locks #473

Merged · 10 commits · May 17, 2022

Conversation

longfangsong (Member)

No description provided.

Signed-off-by: longfangsong <[email protected]>
@longfangsong force-pushed the lock_wait_resolving branch 3 times, most recently from 78592fb to 64d6e44, April 21, 2022 17:14
Signed-off-by: longfangsong <[email protected]>
@longfangsong marked this pull request as ready for review April 22, 2022 01:39
Signed-off-by: longfangsong <[email protected]>
@longfangsong force-pushed the lock_wait_resolving branch from c5a3056 to b761265, May 12, 2022 19:56
@@ -365,6 +365,7 @@ func (action actionPrewrite) handleSingleBatch(c *twoPhaseCommitter, bo *retry.B
}
start := time.Now()
msBeforeExpired, err := c.store.GetLockResolver().ResolveLocks(bo, c.startTS, locks)
defer c.store.GetLockResolver().ResolveLocksDone(c.startTS)
Collaborator

Why is the defer written in prewrite instead of in ResolveLocks? ResolveLocksDone looks to be missing if lock resolving is triggered by other commands.

@longfangsong (Member Author), May 16, 2022

> Why is the defer written in prewrite instead of in ResolveLocks?

It's because we need to keep the resolving information during backoff. From the user's point of view, the lock resolving work is still in progress while we are backing off.

> ResolveLocksDone looks missing if resolve lock is triggered by other commands.

I've added them now.
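The arrangement described above can be sketched roughly like this. This is a hypothetical simplification, not the actual client-go types or signatures: the record made by ResolveLocks stays visible across the caller's backoff-and-retry, and only the caller's deferred ResolveLocksDone clears it.

```go
package main

import (
	"fmt"
	"sync"
)

// Lock stands in for the real client-go Lock type.
type Lock struct{ Key string }

// LockResolver records, per caller startTS, the locks currently being
// resolved, so diagnostics can report them while the caller backs off.
type LockResolver struct {
	mu        sync.Mutex
	resolving map[uint64][]Lock
}

func NewLockResolver() *LockResolver {
	return &LockResolver{resolving: make(map[uint64][]Lock)}
}

// ResolveLocks records the locks being resolved for this caller
// (the actual resolution work is elided in this sketch).
func (lr *LockResolver) ResolveLocks(startTS uint64, locks []Lock) {
	lr.mu.Lock()
	defer lr.mu.Unlock()
	lr.resolving[startTS] = append(lr.resolving[startTS], locks...)
}

// ResolveLocksDone clears the record for this caller.
func (lr *LockResolver) ResolveLocksDone(startTS uint64) {
	lr.mu.Lock()
	defer lr.mu.Unlock()
	delete(lr.resolving, startTS)
}

// Resolving returns the locks currently recorded for a caller.
func (lr *LockResolver) Resolving(startTS uint64) []Lock {
	lr.mu.Lock()
	defer lr.mu.Unlock()
	return lr.resolving[startTS]
}

func main() {
	lr := NewLockResolver()
	// Deferring Done in the caller (as the prewrite path does) keeps
	// the record alive across backoff-and-retry inside the caller.
	func() {
		defer lr.ResolveLocksDone(1)
		lr.ResolveLocks(1, []Lock{{Key: "k1"}})
		// ... backoff would happen here; the record is still visible ...
		fmt.Println(len(lr.Resolving(1))) // 1
	}()
	fmt.Println(len(lr.Resolving(1))) // 0
}
```

If ResolveLocksDone were called inside ResolveLocks itself, the record would vanish before the backoff began, which is what the reply above is arguing against.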

@longfangsong requested a review from sticnarf May 16, 2022 01:28
@longfangsong force-pushed the lock_wait_resolving branch from 7d6ef2d to 49ca14d, May 16, 2022 02:43
@longfangsong (Member Author)

/cc @cfzjywxk @MyonKeminta

@@ -227,6 +227,7 @@ func (action actionPessimisticLock) handleSingleBatch(c *twoPhaseCommitter, bo *
// tikv default will wait 3s(also the maximum wait value) when lock error occurs
startTime = time.Now()
msBeforeTxnExpired, err := c.store.GetLockResolver().ResolveLocks(bo, 0, locks)
defer c.store.GetLockResolver().ResolveLocksDone(c.startTS)
Contributor

It's inside a retry loop here. We'd better either protect it from being deferred more than once, or make the call before entering the next iteration. The same question applies to the prewrite part.
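The hazard being pointed out is that `defer` runs at function return, not at the end of each loop iteration, so a defer inside a retry loop registers one pending call per attempt. A minimal demonstration (hypothetical names, unrelated to the actual client-go code), with `sync.Once` as one way to guard against running the cleanup more than once:

```go
package main

import (
	"fmt"
	"sync"
)

// naive defers the cleanup inside the retry loop: one pending call is
// registered per iteration, and all of them run when the enclosing
// function returns.
func naive() int {
	calls := 0
	func() {
		for attempt := 0; attempt < 3; attempt++ {
			defer func() { calls++ }()
		}
	}()
	return calls
}

// guarded wraps the cleanup in sync.Once, so no matter how many times
// it is deferred, it runs at most once.
func guarded() int {
	calls := 0
	func() {
		var once sync.Once
		for attempt := 0; attempt < 3; attempt++ {
			defer once.Do(func() { calls++ })
		}
	}()
	return calls
}

func main() {
	fmt.Println(naive())   // 3
	fmt.Println(guarded()) // 1
}
```

The alternative suggested in the comment, calling the cleanup explicitly before entering the next iteration, avoids the pile-up without needing a guard at all.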

return lr.resolveLocks(bo, callerStartTS, locks, true, lite)
}

func (lr *LockResolver) saveResolvingLocks(locks []*Lock, callerStartTS uint64) {
Collaborator

What if ResolveLocks happens concurrently for multiple regions?

For example, a batch get involves multiple regions and encounters locks in more than one region, but the resolving array only ever includes the information of one region.
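The concern can be illustrated with a toy sketch (hypothetical helpers, not the actual client-go code): if each save overwrites the caller's record, the second region's locks replace the first region's, whereas appending keeps both. In real concurrent code the map would additionally need a mutex.

```go
package main

import "fmt"

// Lock stands in for the real client-go Lock type.
type Lock struct{ Key string }

// saveOverwrite keeps only the most recent region's locks: a second
// call for the same caller replaces the first call's record.
func saveOverwrite(store map[uint64][]Lock, callerStartTS uint64, locks []Lock) {
	store[callerStartTS] = locks
}

// saveAppend accumulates locks from every region the caller touched.
func saveAppend(store map[uint64][]Lock, callerStartTS uint64, locks []Lock) {
	store[callerStartTS] = append(store[callerStartTS], locks...)
}

func main() {
	// A batch get with startTS 100 meets locks in two regions.
	region1 := []Lock{{Key: "a"}}
	region2 := []Lock{{Key: "z"}}

	over := map[uint64][]Lock{}
	saveOverwrite(over, 100, region1)
	saveOverwrite(over, 100, region2)
	fmt.Println(len(over[100])) // 1: region1's lock was lost

	app := map[uint64][]Lock{}
	saveAppend(app, 100, region1)
	saveAppend(app, 100, region2)
	fmt.Println(len(app[100])) // 2: both regions recorded
}
```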

@MyonKeminta (Contributor)

In this implementation, all callers of ResolveLocks/ResolveLocksForRead are required to call ResolveLocksDone when they finish. Besides the usages within client-go, there are also some invocations in tidb, br, etc., which I'm worried about. If we cannot put both the saving and the removing logic within ResolveLocks, I suggest making saveResolvingLocks a public method, so that only callers that explicitly invoked SaveResolvingLocks need to call ResolveLocksDone.
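The suggested split might look roughly like this (a hypothetical sketch of the proposed API shape, not the merged implementation): ResolveLocks itself records nothing, so legacy callers in tidb, br, etc. are unaffected, and only a caller that opts in via the public SaveResolvingLocks takes on the ResolveLocksDone obligation.

```go
package main

import "fmt"

// Lock stands in for the real client-go Lock type.
type Lock struct{ Key string }

type LockResolver struct {
	resolving map[uint64][]Lock
}

func NewLockResolver() *LockResolver {
	return &LockResolver{resolving: make(map[uint64][]Lock)}
}

// SaveResolvingLocks is public: only a caller that invokes it takes on
// the obligation to call ResolveLocksDone afterwards.
func (lr *LockResolver) SaveResolvingLocks(locks []Lock, callerStartTS uint64) {
	lr.resolving[callerStartTS] = append(lr.resolving[callerStartTS], locks...)
}

// ResolveLocksDone clears the record made by SaveResolvingLocks.
func (lr *LockResolver) ResolveLocksDone(callerStartTS uint64) {
	delete(lr.resolving, callerStartTS)
}

// ResolveLocks itself records nothing, so existing callers keep
// working without ever calling ResolveLocksDone.
func (lr *LockResolver) ResolveLocks(callerStartTS uint64, locks []Lock) {
	// ... actual resolution elided in this sketch ...
}

func main() {
	lr := NewLockResolver()
	locks := []Lock{{Key: "k"}}

	// An opted-in caller records, resolves, then cleans up:
	lr.SaveResolvingLocks(locks, 42)
	lr.ResolveLocks(42, locks)
	fmt.Println(len(lr.resolving[42])) // 1
	lr.ResolveLocksDone(42)

	// A legacy caller that never opted in leaves no record behind:
	lr.ResolveLocks(7, locks)
	fmt.Println(len(lr.resolving[7])) // 0
}
```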

Co-authored-by: MyonKeminta <[email protected]>
Signed-off-by: longfangsong <[email protected]>
@longfangsong force-pushed the lock_wait_resolving branch 2 times, most recently from 50c4871 to 4dc3ac0, May 16, 2022 09:21
@sticnarf (Collaborator)

It looks like it works, but it seems to add more complexity to the code than I expected, though I don't have a better idea yet...

Signed-off-by: longfangsong <[email protected]>
@longfangsong force-pushed the lock_wait_resolving branch from 4dc3ac0 to 35484dd, May 16, 2022 09:54