Reduce lock contention on transaction pool when building a block #7180
Conversation
Force-pushed from a494a9f to 2c05e6f
Signed-off-by: Fabio Di Fabio <[email protected]>
Force-pushed from 2c05e6f to 33f1aa1
Signed-off-by: Fabio Di Fabio <[email protected]>
...src/main/java/org/hyperledger/besu/ethereum/eth/transactions/layered/SparseTransactions.java (outdated review thread, resolved)
candidateTxsBySender = prioritizedTransactions.getBySender();
}

selection:
GOTO-style coding is generally discouraged, as it is hard to read, can create spaghetti code, and is difficult to debug. I would suggest reworking the code with a while loop.
I generally agree with you, but in this specific case this version is probably better: the code fragment is short, and I also tried to choose a label that makes sense when read: break selection (see the labeled-break sketch after this thread).
This is my version with the while loop; it is more verbose, wdyt?
boolean continueSelection = true;
final var itCandidateTxsBySender = candidateTxsBySender.iterator();
while (itCandidateTxsBySender.hasNext() && continueSelection) {
  final var senderTxs = itCandidateTxsBySender.next();
  LOG.trace("highPrioSenderTxs {}", senderTxs);
  boolean continueWithSender = true;
  final var itSenderTxs = senderTxs.pendingTransactions().iterator();
  while (itSenderTxs.hasNext() && continueWithSender) {
    final var candidatePendingTx = itSenderTxs.next();
    final var selectionResult = selector.evaluateTransaction(candidatePendingTx);
    LOG.atTrace()
        .setMessage("Selection result {} for transaction {}")
        .addArgument(selectionResult)
        .addArgument(candidatePendingTx::toTraceLog)
        .log();
    if (selectionResult.discard()) {
      invalidTransactions.add(candidatePendingTx);
      logDiscardedTransaction(candidatePendingTx, selectionResult);
    }
    if (selectionResult.stop()) {
      LOG.trace("Stopping selection");
      continueSelection = false;
      continueWithSender = false;
    } else if (!selectionResult.selected()) {
      // avoid processing other txs from this sender if this one is skipped,
      // since the following ones will not be selected due to the nonce gap
      LOG.trace("Skipping remaining txs for sender {}", candidatePendingTx.getSender());
      continueWithSender = false;
    }
  }
}
I'm primarily concerned about setting a precedent. While I still believe we should avoid goto-like labeled breaks and prefer the while approach, I don't consider this a blocker for merging.
Thanks, then I will prefer to keep it as is for now, but I am available to review future suggestions.
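For reference, a sketch of the labeled-break variant that was kept, reconstructed from the fragment quoted above; the surrounding names (candidateTxsBySender, selector, invalidTransactions, LOG, logDiscardedTransaction) come from that context, and this is not necessarily the exact merged code:

selection:
for (final var senderTxs : candidateTxsBySender) {
  LOG.trace("highPrioSenderTxs {}", senderTxs);
  for (final var candidatePendingTx : senderTxs.pendingTransactions()) {
    final var selectionResult = selector.evaluateTransaction(candidatePendingTx);
    if (selectionResult.discard()) {
      invalidTransactions.add(candidatePendingTx);
      logDiscardedTransaction(candidatePendingTx, selectionResult);
    }
    if (selectionResult.stop()) {
      LOG.trace("Stopping selection");
      break selection; // exits both loops at once; no flag variables needed
    }
    if (!selectionResult.selected()) {
      // a nonce gap means no later tx from this sender can be selected
      break; // unlabeled: just move on to the next sender
    }
  }
}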
.../src/main/java/org/hyperledger/besu/ethereum/eth/transactions/layered/ReadyTransactions.java (outdated review thread, resolved)
# Conflicts:
#   CHANGELOG.md
Signed-off-by: Fabio Di Fabio <[email protected]>
LGTM
Signed-off-by: Fabio Di Fabio <[email protected]>
PR description
Currently, when Besu builds a block, it takes and keeps the lock on the txpool until the selection of the txs to include is over. This can take longer than the block creation timeout, since the tx being processed when the timeout fires must complete before the lock is released. That can delay the start of the next block build on small networks where a single node may build many blocks in a row, for example L2 sequencers. The optimization also benefits normal use cases, since incoming txs can be processed and added to the txpool concurrently with a block creation task. The core idea is sketched below.
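A minimal sketch of the copy-then-release pattern, assuming hypothetical names (TxSelectionSketch, PendingTransaction, prioritized, poolLock are illustrative, not Besu's actual API): take a stable snapshot of the candidates while holding the pool lock, then run the potentially long selection without it.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative model only; names are hypothetical, not Besu's actual classes.
class TxSelectionSketch {
  record PendingTransaction(String sender, long nonce) {}

  private final ReentrantLock poolLock = new ReentrantLock();
  private final List<PendingTransaction> prioritized = new ArrayList<>();

  List<PendingTransaction> selectForBlock() {
    final List<PendingTransaction> candidates;
    poolLock.lock();
    try {
      // cheap snapshot under the lock: no tx evaluation happens here
      candidates = List.copyOf(prioritized);
    } finally {
      poolLock.unlock(); // incoming txs can now be added to the pool concurrently
    }
    // the potentially long selection runs without holding the pool lock,
    // so it can even exceed the block-creation timeout without blocking adds
    final List<PendingTransaction> selected = new ArrayList<>();
    for (final PendingTransaction tx : candidates) {
      selected.add(tx); // real code would evaluate gas limits, nonce gaps, etc.
    }
    return selected;
  }
}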
Below are two profiles taken from a sequencer continuously building blocks, before and after applying this PR; the selected txpool is the layered one, and you can see how the Total Blocked Time went down from more than 10 minutes to milliseconds.
Before: [profiling screenshot]
After: [profiling screenshot]
Fixed Issue(s)
Thanks for sending a pull request! Have you done the following?
Add the doc-change-required label to this PR if documentation updates are required.
Locally, you can run these tests to catch failures early:
./gradlew build
./gradlew acceptanceTest
./gradlew integrationTest
./gradlew ethereum:referenceTests:referenceTests