Reapply #8644 #9242
Conversation
Force-pushed from 6065746 to 27144ba
Waiting to push fix commit until CI completes for the reapplication.
Force-pushed from 7a40c4a to 1e7b192
Looks like there are still a couple of itests failing. Will keep working on this next week.
The error message
This looks relevant, re some of the errors I see in the latest CI run: https://stackoverflow.com/a/42303225
Perhaps part of the issue is with the
Based on the SO link above, we might also be lacking some needed indexes.
With closing the channel and a couple of other tests, I'm seeing logs similar to:
when I reproduce locally, as well as in the CI logs. I'm going to pull on that thread first... On the test config side, also seeing these:
I think the first issue above is with the code, the second is a config issue, and those, together with the other config issue in my comment above, are the three major failures still happening. I think the
This looks like a case where we
Yep, looking into why that isn't caught by the panic/recover mechanism.
It was actually a lack of error checking in
Force-pushed from 1e67a84 to 899ae59
Looks better as far as the errors on closing channels. Will keep working tomorrow to eliminate the other errors.
Hmm, so we don't have great visibility into how much memory these CI machines have. Perhaps we need to modify the connection settings to reduce the number of active connections, and also tune related postgres params. @djkazic has been working on a postgres+lnd tuning/perf guide that I think we can eventually check directly into lnd.
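As a rough illustration of the client-side knob only (a sketch using Go's standard `database/sql` pool and an assumed driver/helper name, not lnd's actual config plumbing):

```go
import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // assumed postgres driver for this sketch
)

// openPostgres caps the number of active connections so parallel itests
// can't exhaust a small CI runner; helper name and values are illustrative.
func openPostgres(dsn string, maxConns int) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}

	// Limit both open and idle connections, and recycle them periodically.
	db.SetMaxOpenConns(maxConns)
	db.SetMaxIdleConns(maxConns)
	db.SetConnMaxLifetime(10 * time.Minute)

	return db, nil
}
```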
This is also very funky: lnd/kvdb/sqlbase/readwrite_bucket.go Lines 336 to 363 in e3cc4d7
We do two queries just to delete: a select to see if the key exists, then the delete, instead of just attempting the delete directly. Stepping back a minute: perhaps the issue is with this flawed KV abstraction we have. Perhaps we should just re-create a better hierarchical KV table from scratch. We use
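A sketch of collapsing that into a single round trip (table and column names are assumptions, not the actual sqlbase schema; assumes `database/sql` is imported):

```go
// deleteKey issues the DELETE directly and uses the affected row count to
// learn whether the key existed, with no prior SELECT.
func deleteKey(tx *sql.Tx, parentID int64, key []byte) (bool, error) {
	res, err := tx.Exec(
		"DELETE FROM kv WHERE parent_id = $1 AND key = $2",
		parentID, key,
	)
	if err != nil {
		return false, err
	}

	n, err := res.RowsAffected()
	if err != nil {
		return false, err
	}

	// n == 0 simply means the key wasn't there.
	return n > 0, nil
}
```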
Here's another instance of duplicated work in lnd/kvdb/sqlbase/readwrite_bucket.go Lines 149 to 187 in e3cc4d7
We select to see if it exists, then potentially do the insert again. Instead, we can just do an upsert in a single statement.
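For illustration, a hedged sketch of that single-statement upsert, assuming an illustrative table with a unique constraint on `(parent_id, key)`:

```go
// putValue replaces the SELECT-then-INSERT/UPDATE dance with one statement.
func putValue(tx *sql.Tx, parentID int64, key, value []byte) error {
	_, err := tx.Exec(
		`INSERT INTO kv (parent_id, key, value)
		 VALUES ($1, $2, $3)
		 ON CONFLICT (parent_id, key)
		 DO UPDATE SET value = EXCLUDED.value`,
		parentID, key, value,
	)
	return err
}
```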
I think the way the sequence is implemented may also be problematic: we have the sequence field directly in the table, which means table locks may need to be held. The sequence gets incremented a lot for things like payments or invoices. We may be able to instead split that out into another table that can be updated independently of the main table: lnd/kvdb/sqlbase/readwrite_bucket.go Lines 412 to 437 in e3cc4d7
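A sketch of the split-out sequence idea, assuming a small dedicated table (schema and names are illustrative only):

```go
// nextSequence bumps a per-bucket counter in its own narrow table, so the
// increment only contends on that row rather than on the bucket's row in the
// main kv table. Assumes: CREATE TABLE kv_sequences (bucket_id BIGINT
// PRIMARY KEY, seq BIGINT NOT NULL).
func nextSequence(tx *sql.Tx, bucketID int64) (uint64, error) {
	var seq uint64
	err := tx.QueryRow(
		`INSERT INTO kv_sequences (bucket_id, seq) VALUES ($1, 1)
		 ON CONFLICT (bucket_id)
		 DO UPDATE SET seq = kv_sequences.seq + 1
		 RETURNING seq`,
		bucketID,
	).Scan(&seq)
	return seq, err
}
```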
I've been able to reduce (but not fully eliminate) the serialization errors. I've also tried treating these errors, and the out-of-shared-memory errors, as serialization errors. In addition, I've found one more place where we get the same error. I pushed these changes above for discussion. My next step is to try to reduce the number of conflicts based on @Roasbeef's suggestions above. I'm going on vacation for the rest of the week until next Tuesday, so will keep working on this then.
I think treating the OOM errors as serialization errors ended up being a mistake. Going to take that out and push when this run is done. In addition, I'm trying to double the
Why does this inner function panic? Is this another instance where we aren't properly catching the error? Or is it that we actually have panics in
Ah ok, I see now that the serialization error handling in general is based around recovering after panics to retry a transaction: Lines 168 to 198 in a101950
Lines 238 to 246 in a101950
I'm not sure why we went in that direction historically. At this kvdb emulation level, we can just pass through that error (not panic), then rely on the normal serialization error handling: Lines 268 to 284 in a101950
I'll be off tomorrow, but I'll see if I can refactor this to avoid panics later this week. The main idea is that I can likely use a channel or a mutex-protected error field in the tx struct to pass back errors instead of a panic in these cases, and then the deferred unlocks should still be executed. Also, we can somewhat rely on the "in a failed tx" errors I've started treating as serialization errors in case something after a
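A minimal sketch of that direction, with all names invented for illustration: bucket code records the internal error on the tx instead of panicking, and the executor returns it first so the serialization-retry logic still sees it while deferred unlocks run normally (assumes `sync` is imported).

```go
type readWriteTx struct {
	mu     sync.Mutex
	intErr error // first internal error seen by a bucket operation
}

// setErr records an internal error instead of panicking.
func (t *readWriteTx) setErr(err error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.intErr == nil {
		t.intErr = err
	}
}

// run executes the closure and surfaces the recorded internal error first,
// so retry-on-serialization-failure handling works without panic/recover.
func (t *readWriteTx) run(f func(tx *readWriteTx) error) error {
	err := f(t)

	t.mu.Lock()
	defer t.mu.Unlock()
	if t.intErr != nil {
		return t.intErr
	}
	return err
}
```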
This reverts commit 67419a7.
Force-pushed from f269f04 to c9dca65
Looks like we have a clean run. Will check out the set of core commits now. I still think we can likely restructure the queries and KV-table, but we can save that for another time.
Nice work with this PR! I can tell some serious tenacity went into iterating on this PR to get to the point it's at now.
batch/batch.go
failIdx = i
}

return dbErr
Do we still want to return the non-mapped version, for when IsSerializationError is false?
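Roughly this shape (a sketch of the suggestion only; `mapError` stands in for whatever mapping the batch code actually applies):

```go
dbErr := mapError(err)
if !IsSerializationError(dbErr) {
	// Not a serialization failure: hand back the original,
	// non-mapped error so callers see the untouched cause.
	return err
}

return dbErr
```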
@@ -2622,8 +2622,14 @@ func (c *ChannelGraph) delChannelEdgeUnsafe(edges, edgeIndex, chanIndex,

// As part of deleting the edge we also remove all disabled entries
// from the edgePolicyDisabledIndex bucket. We do that for both directions.
updateEdgePolicyDisabledIndex(edges, cid, false, false)
updateEdgePolicyDisabledIndex(edges, cid, true, false)
err = updateEdgePolicyDisabledIndex(edges, cid, false, false)
👍
@@ -17,6 +17,11 @@ var (
// ErrRetriesExceeded is returned when a transaction is retried more
// than the max allowed valued without a success.
ErrRetriesExceeded = errors.New("db tx retries exceeded")

postgresErrMsgs = []string{
Style nit: missing godoc comment.
@@ -21,6 +21,7 @@ var (
postgresErrMsgs = []string{
"could not serialize access",
"current transaction is aborted",
"not enough elements in RWConflictPool",
IIUC, this gets returned when the instance runs out of shared memory. I wager this is popping up mainly due to the constrained environment that the CI runners execute on.
That's right, but I think we can retry this one specifically, whereas the out of shared memory error tends to be less retriable. That's why I'm not detecting the error code, but only the string.
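For context, the matching being described is a plain substring check against the messages in that slice, along these lines (a sketch; the helper name is invented and `strings` is assumed to be imported):

```go
// isRetriablePostgresErr retries only the specific messages listed in
// postgresErrMsgs, rather than keying off the error code, so broader
// out-of-shared-memory failures are not blindly retried.
func isRetriablePostgresErr(err error) bool {
	if err == nil {
		return false
	}
	for _, msg := range postgresErrMsgs {
		if strings.Contains(err.Error(), msg) {
			return true
		}
	}
	return false
}
```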
@@ -38,7 +38,7 @@ const (
SqliteBackend = "sqlite"
DefaultBatchCommitInterval = 500 * time.Millisecond

defaultPostgresMaxConnections = 50
defaultPostgresMaxConnections = 20
With the other fixes later in this commit, if we revert this (back to 50), do things still pass? If not, then we may want to implement clamping for an upper limit here.
Will check and see if I can tune to be OK for 50.
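If clamping does turn out to be needed, it could be as simple as the following sketch (the constant and helper names are made up for illustration; `defaultPostgresMaxConnections` is the existing constant from the diff above):

```go
// maxPostgresConnections is a hypothetical hard upper bound.
const maxPostgresConnections = 50

// clampMaxConnections keeps a user-configured value within sane limits.
func clampMaxConnections(cfgValue int) int {
	switch {
	case cfgValue <= 0:
		return defaultPostgresMaxConnections
	case cfgValue > maxPostgresConnections:
		return maxPostgresConnections
	default:
		return cfgValue
	}
}
```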
// Apply each request in the batch in its own transaction. Requests that
// fail will be retried by the caller.
for _, req := range b.reqs {
Perhaps we should just bypass the batch scheduler altogether for postgres? IIRC, we added it originally to speed up the initial graph sync for bbolt, by reducing the total number of transactions we did.
I might be able to take this commit out altogether, will check to see after I've fixed the last deadlock I'm working on now. Otherwise, I'll refactor to just skip the batch scheduler for postgres, should be simpler.
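A self-contained sketch of what skipping the scheduler for postgres might look like (types and fields are stand-ins, not the actual `batch` package API; assumes `database/sql`):

```go
// request loosely mirrors a batch request: an Update closure plus a Reset
// callback to undo partial state before a retry.
type request struct {
	Update func(tx *sql.Tx) error
	Reset  func()
}

type scheduler struct {
	db         *sql.DB
	isPostgres bool
	queue      chan *request
}

// Execute runs each request in its own transaction on postgres, where
// coalescing mostly widens the window for serialization conflicts, and
// falls back to the normal batching queue for bbolt-style backends.
func (s *scheduler) Execute(req *request) error {
	if !s.isPostgres {
		s.queue <- req
		return nil
	}

	req.Reset()
	tx, err := s.db.Begin()
	if err != nil {
		return err
	}
	if err := req.Update(tx); err != nil {
		_ = tx.Rollback()
		return err
	}
	return tx.Commit()
}
```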
return catchPanic(func() error { return f(kvTx) })

err := f(kvTx)
// Return the internal error first in case we need to retry and
Style nit: missing a newline above.
@@ -1624,14 +1625,25 @@ func (s *UtxoSweeper) monitorFeeBumpResult(resultChan <-chan *BumpResult) {
}

case <-s.quit:
log.Debugf("Sweeper shutting down, exit fee " +
"bump handler")
log.Debugf("Sweeper shutting down, exit fee "+
Temp commit that can be dropped?
Yeah, was hoping to get this deadlock in CI, but the deadlock happened in another test that didn't produce this output.
I'm able to reproduce the deadlock and think I've figured out how it happens. Running some tests to ensure it's fixed, then if it stays good, will submit a small PR to btcwallet with the fix. I lied, still working on a fix.
I'll clean this up shortly, but responding to a few comments.
Note that the failure in CI from the previous push is the same deadlock we've seen before in htlc_timeout_resolver_extract_preimage_(remote|local)
but in a different test, so it didn't end up showing the goroutine dump. But I think I've tracked it down and will submit a PR to btcwallet to fix it. I think I have a way to track it down, but am still working on it. It's definitely in waddrmgr.
Re the shared memory issue, I think we can get around that by bumping up the size of the CI instance we use for these postgres tests: https://docs.github.com/en/actions/using-github-hosted-runners/using-larger-runners/running-jobs-on-larger-runners
I think I've found the deadlock. With more than one DB transaction allowed in parallel for btcwallet, we're running into a deadlock similar to the following. This example is from the UTXO sweeper tests, but it can happen in other situations as well.

In one goroutine, the UTXO sweeper calls

So while the top-level

In another goroutine, we see the sweeper call

So the sequence in this case is that the

This has previously been mitigated by the fact that each of these happens inside a database transaction, which never ran in parallel. However, with parallel DB transactions made possible by this change, the inner deadlock is exposed. I'll submit a PR next week to btcwallet to fix this, and then clean up this PR/respond to the comments above.
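Since the exact call chains got lost above, here is a generic illustration (not btcwallet's actual code) of the lock-ordering pattern being described: two goroutines acquire the same two locks in opposite order, which was harmless while DB transactions were serialized but deadlocks once they run in parallel.

```go
var (
	walletMu sync.Mutex // outer lock, e.g. around the wallet
	storeMu  sync.Mutex // inner lock, e.g. around an address/tx store
)

// Goroutine A: wallet lock first, then the store lock.
func goroutineA() {
	walletMu.Lock()
	defer walletMu.Unlock()

	storeMu.Lock()
	defer storeMu.Unlock()
	// ... do work while holding both ...
}

// Goroutine B: store lock first, then the wallet lock — the opposite order.
// If A holds walletMu while B holds storeMu, neither can make progress.
func goroutineB() {
	storeMu.Lock()
	defer storeMu.Unlock()

	walletMu.Lock()
	defer walletMu.Unlock()
	// ... do work while holding both ...
}
```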
I've submitted btcsuite/btcwallet#967 to fix the deadlock mentioned above.
Change Description
Fix #9229 by reapplying #8644 and:

- fixes to the `batch` package
- applying `batch` requests in their own transactions for the postgres db backend to reduce serialization errors
- additional error checking in the `channeldb` package
- treating `current transaction is aborted` errors as serialization errors, in case we hit a serialization error and ignore it, and then get this error in a subsequent call to postgres
- updating the `db-instance` postgres flags in the `Makefile` per @djkazic's recommendations
- reducing the `maxconnections` parameter for postgres DBs to 20 instead of 50 by default

Steps to Test
See the failing itests prior to the fix, and the passing itests after the fix.