fix(server): wrap read-modify-write apis with distributed lock #5979
Conversation
This stack of pull requests is managed by Graphite.
☁️ Nx Cloud Report: CI ran commands for commit 01b39ba. ✅ Successfully ran 6 targets.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           canary    #5979      +/-   ##
==========================================
+ Coverage   61.02%   61.32%   +0.29%
==========================================
  Files         488      492       +4
  Lines       22115    22447     +332
  Branches     1944     1959      +15
==========================================
+ Hits        13496    13765     +269
- Misses       8390     8453      +63
  Partials      229      229

Flags with carried forward coverage won't be shown.
this is a bad idea, it makes all the db queries have to be aware of the query engine.
currently Prisma only supports getting a transaction via Prisma.$transaction, but I just found a Prisma transaction plugin for NestJS. What are your thoughts on this usage?
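For context, a minimal sketch of the interactive transaction API being discussed (prisma.$transaction with an async callback); the workspaceUser model, the MAX_MEMBERS limit, and the invite signature are illustrative assumptions, not the project's actual schema:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()
const MAX_MEMBERS = 10 // hypothetical limit, for illustration only

async function invite(workspaceId: string, userId: string) {
  // every query inside the callback runs on the same transaction client `tx`,
  // so the read and the write below commit (or roll back) together
  await prisma.$transaction(async (tx) => {
    const count = await tx.workspaceUser.count({ where: { workspaceId } })
    if (count >= MAX_MEMBERS) throw new Error('workspace member limit reached')
    await tx.workspaceUser.create({ data: { workspaceId, userId } })
  })
}

Note that unless the transaction runs at a serializable isolation level, two concurrent invites can still both read the same count, which is part of why the thread moves toward an explicit lock.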
what about a resource lock for mutating data? for example:

async function invite(workspaceId, userId) {
  // this is good because it tells us there may be a concurrency issue if the resource is not locked.
  // we can safely poll the lock within a given time limit like 100ms (is 100ms enough for any table mutation?) and block until the last holder releases it.
  // if the time limit passes, force-rewrite the lock and go ahead.
  const lock = await this.mutex.lock('members:' + workspaceId)
  try {
    await addUserToWorkspace(workspaceId, userId)
  } finally {
    lock.release()
  }

  // ...or
  await this.mutex.lock('members:' + workspaceId, async () => {
    // ^ maybe strongly typed so we don't get lost when different callers request the same lock
    await addUserToWorkspace(workspaceId, userId)
  })
}
if we use a mutex, then it needs to be a distributed lock; but for single-table read-modify-write (e.g.
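For illustration, a minimal sketch of what the distributed variant could look like, assuming a Redis backend via ioredis; the key naming, TTL, retry interval, and force-takeover behaviour follow the comment above and are assumptions, not the code merged in this PR:

import Redis from 'ioredis'
import { randomUUID } from 'node:crypto'

// try to acquire `key` with SET NX PX; poll until `timeoutMs` passes,
// then force-rewrite the lock and go ahead, as the comment above suggests
async function acquireLock(redis: Redis, key: string, ttlMs = 1000, timeoutMs = 100): Promise<string> {
  const token = randomUUID()
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if (await redis.set(key, token, 'PX', ttlMs, 'NX')) return token
    await new Promise((resolve) => setTimeout(resolve, 10))
  }
  await redis.set(key, token, 'PX', ttlMs) // time limit passed: force rewrite
  return token
}

// release only if we still own the lock (compare-and-delete, done atomically in Lua)
async function releaseLock(redis: Redis, key: string, token: string): Promise<void> {
  await redis.eval(
    'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end',
    1,
    key,
    token,
  )
}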
force-pushed from b2d7731 to e5a6676
force-pushed from 7726607 to 256a053
force-pushed from 256a053 to 858bf26
Review comment on packages/backend/server/src/core/workspaces/resolvers/workspace.ts (outdated, resolved)
force-pushed from 858bf26 to df14857
force-pushed from f0c3aed to 5c5208b
considering that the auth module is being refactored, restack the throttler refactor branch and develop it based on the auth branch rather than this branch
How about exposing: await using _ = await this.mutex.lock(`key`);
Looks more elegant than the existing solution; I will add support for this usage.
force-pushed from 80131f3 to ac3a161
impl new distributed lock with `using` syntax:
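A minimal sketch of how a lock handle can support the await using syntax mentioned above, by implementing Symbol.asyncDispose (requires TypeScript 5.2+ with the explicit resource management lib); the Mutex shape and the acquireLock/releaseLock helpers are carried over from the earlier sketch in this thread and are assumptions, not necessarily the class added by this PR:

import Redis from 'ioredis'

declare function addUserToWorkspace(workspaceId: string, userId: string): Promise<void>
// hypothetical helpers from the earlier Redis lock sketch
declare function acquireLock(redis: Redis, key: string): Promise<string>
declare function releaseLock(redis: Redis, key: string, token: string): Promise<void>

// lock handle that releases itself when an `await using` binding leaves scope
class Lock implements AsyncDisposable {
  constructor(private readonly release: () => Promise<void>) {}

  async [Symbol.asyncDispose]() {
    await this.release()
  }
}

class Mutex {
  constructor(private readonly redis: Redis) {}

  async lock(key: string): Promise<Lock> {
    const token = await acquireLock(this.redis, key)
    return new Lock(() => releaseLock(this.redis, key, token))
  }
}

// usage: the lock is released automatically at the end of the enclosing scope,
// even if addUserToWorkspace throws
async function invite(mutex: Mutex, workspaceId: string, userId: string) {
  await using _lock = await mutex.lock('members:' + workspaceId)
  await addUserToWorkspace(workspaceId, userId)
}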