
rethink sendTransaction's internal queue #2081

Closed
holic opened this issue Dec 23, 2023 · 4 comments

@holic
Member

holic commented Dec 23, 2023

I think I've wrapped my head around why our sendTransaction can have a kind of "failure cascade" and why it shows up most drastically with optimistic rendering, especially when the next write depends on the previous one.

Our current approach mirrors sendTransaction but uses an in-memory nonce counter and parallelizes everything except submitting the prepared tx to the mempool (to ensure nonce ordering). It looks something like:

tx1: simulate --> send --> return
tx2: simulate -----------> send --> return
tx3: simulate --------------------> send --> return

(the nonce isn't incremented until just before send, where we send the simulated/prepared tx to the mempool with the next nonce, to ensure we don't increment the nonce if the simulation fails)

The problem shows up if tx2 or tx3 depends on state from tx1: the parallelized simulation can produce incorrect results, but we submit the tx anyway. And our current approach makes it hard to signal that we should re-simulate a given tx and all the ones after it.
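
A rough sketch of that shape, with hypothetical `simulate`/`send` helpers standing in for the eth_call-style simulation and the mempool submission (not the actual implementation):

```ts
type PreparedTx = { to: `0x${string}`; data: `0x${string}` };

declare function simulate(tx: PreparedTx): Promise<PreparedTx>; // eth_call-style simulation/preparation
declare function send(tx: PreparedTx & { nonce: number }): Promise<`0x${string}`>; // submit to the mempool

let nonce = 0; // in-memory nonce counter
let sendQueue: Promise<unknown> = Promise.resolve(); // serializes only the send step

async function sendTransaction(tx: PreparedTx): Promise<`0x${string}`> {
  // simulation is NOT serialized, so tx2/tx3 can simulate against stale state
  const prepared = await simulate(tx);
  // only the send is queued; the nonce is assigned just before submitting,
  // so a failed simulation never consumes a nonce
  const hash = sendQueue.then(() => send({ ...prepared, nonce: nonce++ }));
  sendQueue = hash.catch(() => {}); // keep the queue alive even if a send fails
  return hash;
}
```

The stale-state problem lives in that first `await simulate(tx)`: it runs as soon as the write is issued, possibly before the previous write has even reached the mempool.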

To allow for re-simulating a tx after a revert and to avoid doing too much unnecessary work ahead of time, we can restructure the queue to always have the next tx (or two?) in the queue simulated before sending to the mempool, and be able to retry the simulation in case of failures:

tx1: simulate --> send --> return
tx2:              simulate --> send --> return
tx3:                           simulate --> send --> return

If a send call to the mempool reverts, a retry might look like:

tx1: simulate --> send --> return
tx2:              simulate --> send --> revert --> simulate --> send --> return
tx3:                           simulate ----------------------> simulate --> send --> return
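
A minimal sketch of that restructured queue (same hypothetical `simulate`/`send` helpers as above, one retry per tx, and ignoring the optional look-ahead of simulating the next tx while the previous one is in flight):

```ts
type PreparedTx = { to: `0x${string}`; data: `0x${string}` };

declare function simulate(tx: PreparedTx): Promise<PreparedTx>;
declare function send(tx: PreparedTx & { nonce: number }): Promise<`0x${string}`>;

let nonce = 0;

async function processQueue(queue: PreparedTx[]): Promise<void> {
  for (const tx of queue) {
    let attempts = 0;
    while (true) {
      const prepared = await simulate(tx); // re-simulated on every attempt
      try {
        await send({ ...prepared, nonce });
        nonce++; // only advance the nonce once the mempool accepted the tx
        break;
      } catch (error) {
        if (++attempts >= 2) throw error; // give up after one retry (or flush the rest of the queue)
      }
    }
  }
}
```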

If I recall, the simulate call is there to ensure we can safely increment the nonce and to get nicer errors from reverts (a plain send doesn't give us that data), so we could also flip this around: send first, only simulate after a revert, and only increment the nonce on success:

tx1: send --> nonce++ and return
tx2:          send --> revert --> send --> revert --> simulate and return error
tx3:                                                  send --> nonce++ and return

(the above assumes we only retry a reverted send call once)

There's also an open question around whether we should flush the queue if there are consistent reverts after retries, where tx3 in the above example wouldn't execute at all.

(Edit: Nevermind, we should still simulate to determine if a tx will revert, because the mempool will accept a valid tx even if it's expected to revert, and then we'd 1) pay gas for the revert and 2) have to wait until the tx is mined before determining its success, negating the benefits of this queue)

@holic
Member Author

holic commented Mar 3, 2024

We may want to make this conditional and only do it for local accounts, since MM and others do their own nonce handling?

@tash-2s
Contributor

tash-2s commented Mar 4, 2024

MM and others do their own nonce handling?

This MetaMask doc page says, "MetaMask doesn't allow dapp developers to customize nonces".
https://docs.metamask.io/wallet/how-to/send-transactions/#nonce

Since sending many transactions in a short period is harder with external wallets than with burner wallets, can we just use the bare sendTransaction for external wallets?
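
Something like this could work (a sketch; the client shape and `sendViaQueue` are hypothetical names, but viem accounts do expose `type: "local" | "json-rpc"` to branch on):

```ts
type Tx = { to: `0x${string}`; data: `0x${string}` };

// viem accounts expose `type: "local" | "json-rpc"`; json-rpc accounts
// (MetaMask etc.) do their own nonce handling in the wallet
declare const client: {
  account?: { type: "local" | "json-rpc" };
  sendTransaction(tx: Tx): Promise<`0x${string}`>; // bare, wallet-managed send
};
declare function sendViaQueue(tx: Tx): Promise<`0x${string}`>; // hypothetical queued path from above

async function write(tx: Tx): Promise<`0x${string}`> {
  if (client.account?.type === "local") {
    // burner/local account: we manage nonces ourselves, so go through the queue
    return sendViaQueue(tx);
  }
  // external wallet: use the bare sendTransaction and let the wallet pick
  // nonces and order transactions itself
  return client.sendTransaction(tx);
}
```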

@holic
Member Author

holic commented Mar 4, 2024

Thought about this some more, and even the above approach of simulating a transaction in parallel with the send of the previous one doesn't help us much: it saves some latency in the best case and gets us reverted txs in the worst case. My understanding is that just submitting to the mempool doesn't simulate the tx result at all; it only validates the nonce and maybe the gas?

I think the best path forward is to simulate immediately before sending, without any parallelization for now, and we can figure out how to speed things up later.
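
A minimal sketch of that simplified shape (same hypothetical `simulate`/`send` helpers as before), where every write is fully serialized so each simulation sees the state left by the previous write:

```ts
type PreparedTx = { to: `0x${string}`; data: `0x${string}` };

declare function simulate(tx: PreparedTx): Promise<PreparedTx>;
declare function send(tx: PreparedTx & { nonce: number }): Promise<`0x${string}`>;

let nonce = 0;
let queueTail: Promise<unknown> = Promise.resolve();

function writeSerialized(tx: PreparedTx): Promise<`0x${string}`> {
  const result = queueTail.then(async () => {
    const prepared = await simulate(tx); // simulate immediately before sending
    const hash = await send({ ...prepared, nonce });
    nonce++; // only consume the nonce after the mempool accepted the tx
    return hash;
  });
  queueTail = result.catch(() => {}); // a failed write shouldn't wedge the queue
  return result;
}
```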

@holic
Member Author

holic commented Mar 5, 2024

Closing for now since the underlying issue was addressed by #2364.

@holic holic closed this as completed Mar 5, 2024