feat: ABCI 1.0 baseapp integration #13453
Conversation
* feat(mempool): tests and benchmark for mempool * added base for select benchmark * added simple select mempool test * added select mempool test * insert 100 and 1000 * t * t * t * move mempool to its own package and fix a small nil pointer error * Minor bug fixes for the sender pool; * minor fixes to consume * test * t * t * t * added gen tx with order * t
Looks great! I have no major changes to recommend. There were a few minor nit comments. The only outstanding item seems to be around the semantics of DefaultProcessProposal and how to handle invalid txs.
baseapp/baseapp.go
Outdated
return err
_, _, _, _, err = app.runTx(runTxProcessProposal, txBytes)
if err != nil {
	err = app.mempool.Remove(tx)
Ahh right, so when an honest process receives a proposal and executes it, any txs that are deemed invalid should still be included in the proposal. I think this is what @sergio-mena is alluding to. In other words, we should not reject the tx or remove it from the mempool.
Giving an ACK here. But we do need to undo the changes to x/distribution/types/codec.go
//
// If any transaction fails to pass either condition, the proposal is rejected.
func (app *BaseApp) DefaultProcessProposal() sdk.ProcessProposalHandler {
	return func(ctx sdk.Context, req abci.RequestProcessProposal) abci.ResponseProcessProposal {
Food for thought: as an initial implementation for ABCI 1.0, why not just always return abci.ResponseProcessProposal_ACCEPT?
Some benefits:
- It gives applications the exact same behavior that exists in the SDK today, which reduces the potential for any liveness risks or bugs.
- It's simpler and easier for you to test.
- It's more performant.
My understanding is that the introduction of ProcessProposal was not intended specifically to solve some existing consensus issue on Cosmos SDK chains; rather, it was introduced to give application developers flexibility (like us at dYdX) to reject proposals. So even if your default implementation always returns abci.ResponseProcessProposal_ACCEPT, you are still fulfilling the promise of ProcessProposal by allowing application developers to optionally implement their own logic via SetProcessProposal.
If in the future there emerges a strong case to return REJECT in DefaultProcessProposal (e.g. if you want to validate that the transactions in a block do not exceed MaxGas, or something like that), then you could easily introduce such changes iteratively in a future version of the SDK.
Running every transaction through CheckTx seems a bit extreme to me, and it's not clear that it solves any specific problem that exists in any existing chain today; however, I could be ignorant or wrong about this. It also actively makes upgrades riskier. For example, is it possible that some existing chain could have added a custom AnteHandler which doesn't necessarily expect to be invoked during ProcessProposal? Is this something that developers upgrading to v0.47 should be aware of and audit in their codebases when upgrading?
In any case, dYdX will be using SetProcessProposal, so the default behavior is of no concern to us. This is merely my own personal opinion on a more iterative upgrade path for the SDK that reduces some of the surface area while still retaining the most important feature (the ability for developers to customize ProcessProposal via SetProcessProposal).
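For illustration, a minimal sketch of what that opt-out looks like from an application's point of view. This assumes the handler signature shown in the diff above and the Tendermint v0.37 abci import path; it is not the PR's code, just the shape of an always-accept handler registered via SetProcessProposal:

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/baseapp"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// setAlwaysAcceptProcessProposal registers a ProcessProposal handler that
// unconditionally accepts, mirroring pre-ABCI-1.0 behavior. Applications that
// want custom proposal validation would register their own handler instead.
func setAlwaysAcceptProcessProposal(app *baseapp.BaseApp) {
	app.SetProcessProposal(func(ctx sdk.Context, req abci.RequestProcessProposal) abci.ResponseProcessProposal {
		return abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}
	})
}
```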
Thanks @prettymuchbryce, you make some good points. I've captured them in #13810 for further review and implementation. If we decide it's something we want, @JeancarloBarrios or I can knock it out pretty quickly. At this moment, changing direction would mean a thoughtful rewrite. Perhaps it's warranted, but this PR was opened Oct 5, so we feel this iteration is good enough for an MVP. Let's continue the discussion! 🙏
That makes sense. Thanks for humoring me! 😄
// NOTE: runTx was already run in CheckTx which calls mempool.Insert so ideally everything in the pool
// should be valid. But some mempool implementations may insert invalid txs, so we check again.
_, _, _, _, err = app.runTx(runTxPrepareProposal, bz)
I don't necessarily understand the point of forcing the runTx behavior on all app-side mempool implementations. If anything, this just makes the SetMempool API more restrictive. It also forces the performance burden on all custom app-side mempools of running all transactions through the AnteHandlers a third time (CheckTx, DeliverTx, and now PrepareProposal). As stated in the comment here, this shouldn't be necessary unless the application developer's custom app-side mempool implementation has a bug or is trying to do something unexpected. They can already break themselves in other ways; for example, a developer could add a panic to their Select implementation, which means no proposal would ever be sent, right?
Just a suggestion and not a dealbreaker, but I think I'd prefer to be able to opt out of this somehow.
- AnteHandlers add no performance burden -- that's the entire point of them: lightweight checks prior to tx execution.
- We add validity checks here, as stated per the ADR. In addition, it's possible for an app to include/inject txs on Insert during CheckTx, so this ensures any injected txs are still valid.
I disagree that they add no performance burden. Some of the checks read from state which could potentially mean disk reads if the state is not in cache. There is also signature verification which is not free.
A couple of examples of reading from state during AnteHandlers:
- Reading the Account in ConsumeTxSizeGasDecorator here.
- Calling SendCoinsFromAccountToModule in DeductFees, which reads module accounts from state here.
There is also signature verification here, which looks to take ~0.15ms per op for secp256k1 if I am understanding the benchmarks correctly (Sig/secp256k1-8). This means that while ABCI is globally locked during PrepareProposal, you're doing this operation for each transaction serially. So if a block has 100 transactions on average, you're spending ~15ms per block blocking the entire application and verifying transaction signatures that have already been verified.
Let me know if I'm misunderstanding something!
I disagree that they add no performance burden. Some of the checks read from state which could potentially mean disk reads if the state is not in cache. There is also signature verification which is not free.
True, state is read from but not written to. In the grand scheme of things, the performance overhead is near negligible.
True, state is read from but not written to. In the grand scheme of things, the performance overhead is near negligible.
Just to be super clear, 15 milliseconds of additional global lock time per block (as in my signature example above) is definitely not negligible for us, especially given this is totally unnecessary computation in our case. It's very likely we're a unique case given our throughput aspirations, so your assessment is probably true for most other chains.
Agree with @prettymuchbryce. In profiles I have run on live networks, ante handlers show up quite often and do add a non-negligible amount of overhead. For many chains this isn't a problem, but it's not something we should overlook as we are trying to make the SDK more performant.
Again, we validate the txs (i.e. run the AnteHandler) since applications can insert txs on Insert during CheckTx, and we need to ensure these are still valid when selecting during PrepareProposal.
Also, recall:
- The application defines the AnteHandler chain
- The application, and thus its AnteHandler, is aware of the tx execution context (i.e. PrepareProposal)
So if this doesn't suit your needs, your AnteHandler chain can easily do something like "If PrepareProposal, skip XYZ".
The application defines the AnteHandler chain
So if this doesn't suit your needs, your AnteHandler chain can easily do something like "If PrepareProposal, skip XYZ".
This is a really good point that I had forgotten. I think this would be a reasonable solution.
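As a purely illustrative sketch of the "If PrepareProposal, skip XYZ" idea: a decorator that bypasses an expensive inner check while a proposal is being built. The isPrepareProposal hook is an assumption, not an SDK API; how the execution mode is surfaced to the AnteHandler depends on the final baseapp wiring.

```go
import sdk "github.com/cosmos/cosmos-sdk/types"

// skipDuringPrepareProposal wraps an expensive decorator (e.g. signature
// verification) and skips it while a proposal is being prepared, since the
// txs in the app-side mempool were already verified in CheckTx.
type skipDuringPrepareProposal struct {
	inner             sdk.AnteDecorator
	isPrepareProposal func(ctx sdk.Context) bool // assumed hook; not a real SDK API
}

func (d skipDuringPrepareProposal) AnteHandle(
	ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler,
) (sdk.Context, error) {
	if d.isPrepareProposal(ctx) {
		// Bypass the wrapped check and continue down the chain.
		return next(ctx, tx, simulate)
	}
	return d.inner.AnteHandle(ctx, tx, simulate, next)
}
```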
} else if byteCount += txSize; byteCount <= req.MaxTxBytes {
	txsBytes = append(txsBytes, bz)
} else {
	break
Food for thought:
This else block seems to suggest that the txs queued up in the iterator won't be part of the returned response once MaxTxBytes has been reached. Not processing all the queued-up txs is not desirable in certain app scenarios.
For example, let's say Select returns txs [A1, A2, B1, B2] but PrepareProposal only processes [A1, A2] and leaves out [B1, B2] because MaxTxBytes has been reached. Select returned the txs in the above order because the app has an invariant that txs of type A must appear before txs of type B, but from the app's perspective, txs of type B are much more important to include in the block than type A. So in this case the invariant is respected, but the app doesn't get to include txs of type B and is not alerted about this behavior.
Thinking out loud, to prevent this issue the app can validate that the txs returned in Select are under the MaxTxBytes cap. However, the app doesn't get to learn whether all the txs returned in Select were processed or not (it speculates that the txs were processed by PrepareProposal, but there's no explicit signal).
Thinking out loud, some suggestions:
- Hint MaxTxBytes as part of Select
- Add a way for the app to learn that not all txs returned in Select were processed by PrepareProposal, and retry Select/PrepareProposal
- Add a mode where "process all txs returned by Select or fail"
The reason we don't pass MaxBytes to Select is that we need to ensure that selected txs are valid.
We discussed this further offline. #13831 fixes the concern.
@@ -507,16 +550,25 @@ func validateBasicTxMsgs(msgs []sdk.Msg) error {
// Returns the application's deliverState if app is in runTxModeDeliver,
nit: should this comment get updated?
	return app.prepareProposalState
case runTxProcessProposal:
	return app.processProposalState
default:
nit: should these cases explicitly switch on each of the mode values and have the default panic to catch any unhandled modes?
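Something like the following sketch (mode and field names are taken from the surrounding diff; the full set of modes is an assumption):

```go
switch mode {
case runTxPrepareProposal:
	return app.prepareProposalState
case runTxProcessProposal:
	return app.processProposalState
case runTxModeDeliver:
	return app.deliverState
default:
	// Fail loudly on any mode that was not explicitly handled.
	panic(fmt.Sprintf("unhandled runTxMode: %v", mode))
}
```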
	if err != nil {
		return gInfo, nil, anteEvents, priority, err
	}
} else if mode == runTxModeDeliver {
For my own understanding, why is it okay to not handle other types of runTxMode?
We remove during DeliverTx (stated in the ADR).
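For context, a rough sketch of that DeliverTx-time removal, with the shape assumed from the discussion rather than taken from the PR's code:

```go
// Inside runTx, after a transaction has been executed in DeliverTx mode,
// drop it from the app-side mempool; other modes leave the mempool untouched.
if mode == runTxModeDeliver {
	// Failing to remove (e.g. the tx was never in this node's mempool) may be
	// tolerable depending on policy; here we simply surface the error.
	if err := app.mempool.Remove(tx); err != nil {
		return gInfo, nil, anteEvents, priority, err
	}
}
```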
// mempool, up to maxBytes or until the mempool is empty. The application can
// decide to return transactions from its own mempool, from the incoming
// txs, or some combination of both.
Select(txs [][]byte, maxBytes int64) ([]Tx, error)
Related to the above comment about MaxTxBytes: I see that the previous impl hints the max bytes. Why is this removed? Was it because "with iterators, PrepareProposal can handle the cutoff point"?
Exactly right.
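For readers following along, a rough sketch of the iterator-based shape being discussed. Names are illustrative and may not match the final interface; the point is that PrepareProposal walks the iterator and applies the MaxTxBytes cutoff itself, so Select no longer needs a max-bytes hint.

```go
import sdk "github.com/cosmos/cosmos-sdk/types"

// Iterator walks txs in whatever order the mempool chooses.
type Iterator interface {
	Next() Iterator // returns nil once the mempool is exhausted
	Tx() sdk.Tx
}

// Mempool selection no longer takes a max-bytes hint; the caller
// (PrepareProposal) decides when to stop consuming the iterator.
type Mempool interface {
	Insert(ctx sdk.Context, tx sdk.Tx) error
	Select(ctx sdk.Context, txs [][]byte) Iterator
	Remove(tx sdk.Tx) error
}
```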
Description
Ref: #12272
Summary of changes
- Removed the mempool.Tx interface and the notion of transaction size; byte counting logic pulled up to baseapp.
- Changed mempool.Select to use iterator semantics instead of enumerating the mempool all at once.
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and please add links to any relevant follow-up issues.
I have...
- added ! to the type prefix if API or client breaking change
- added a changelog entry to CHANGELOG.md
Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add your handle next to the items reviewed if you only reviewed selected items.
I have...
- confirmed ! in the type prefix if API or client breaking change