rethink `sendTransaction`'s internal queue #2081
Comments
We may want conditionals for only local accounts, since MM and others do their own nonce handling?
This MetaMask doc page says, "MetaMask doesn't allow dapp developers to customize nonces". Since sending many transactions in a short period is harder with external wallets than with burner wallets, can we just use the bare `sendTransaction` for external wallets?
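To make the conditional above concrete, a per-account switch might look like the sketch below. The names are hypothetical (`"json-rpc"` loosely mirrors how viem labels wallet-backed accounts versus `"local"` ones); this is an illustration, not the repo's actual implementation:

```typescript
// Hypothetical sketch: only run our own in-memory nonce management for
// local (burner) accounts. External wallets like MetaMask handle nonces
// themselves and don't let dapps customize them.
type AccountType = "local" | "json-rpc";

function shouldManageNonce(accountType: AccountType): boolean {
  // "json-rpc" stands in for any external wallet-backed account here.
  return accountType === "local";
}
```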
Thought about this some more: even the above approach of simulating the transaction in parallel with the send of the previous one doesn't help us much, except to save some latency in the best case, and it gets us reverted txs in the worst case. My understanding is that just submitting to the mempool doesn't simulate the tx result at all, it only validates the nonce and maybe also gas? I think the best path forward for now is to just simulate immediately before sending, without any parallelization (for now), and we can figure out how to speed things up later.
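A toy model of why parallel simulation misleads when writes depend on each other: a stale state snapshot can report a revert for a tx that would actually succeed on chain (or pass a tx that would fail). Everything here (`State`, `requires`, the counter) is invented purely for illustration:

```typescript
// Toy world state: a single counter that each applied tx increments.
type State = { counter: number };
type Tx = { id: string; requires: number }; // counter value the tx depends on

// Simulation "reverts" (throws) if the dependency isn't met yet.
function simulate(state: State, tx: Tx): void {
  if (state.counter < tx.requires) throw new Error(`revert: ${tx.id}`);
}

// Parallel-style: simulate every tx against the SAME initial snapshot,
// as if all simulations ran concurrently before any send landed.
function simulateAllUpfront(initial: State, txs: Tx[]): string[] {
  const reverted: string[] = [];
  for (const tx of txs) {
    try {
      simulate({ ...initial }, tx);
    } catch {
      reverted.push(tx.id);
    }
  }
  return reverted;
}

// Serial: simulate each tx immediately before "sending" it, so it sees
// the state produced by the previous tx.
function simulateSerially(initial: State, txs: Tx[]): string[] {
  const state = { ...initial };
  const reverted: string[] = [];
  for (const tx of txs) {
    try {
      simulate(state, tx);
      state.counter++; // "send" applies the state change
    } catch {
      reverted.push(tx.id);
    }
  }
  return reverted;
}
```

With `tx2` depending on `tx1`'s write (`requires: 1`), the upfront pass flags `tx2` as reverting even though it would succeed, while the serial pass sees the updated state and is clean.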
Closing for now since the underlying issue was addressed by #2364.
I think I have my head wrapped around why our `sendTransaction` can sort of have a "failure cascade", and why this shows up most drastically with optimistic rendering, especially when the next write depends on the previous write.

Our current approach mirrors `sendTransaction` but uses an in-memory nonce counter and parallelizes everything except submitting the prepared tx to the mempool (to ensure nonce ordering). The nonce isn't incremented until just before `send`, where we submit the simulated/prepared tx to the mempool with the next nonce, so that we don't increment the nonce if the simulation fails.

The problem shows up if `tx2` or `tx3` depends on state from `tx1`: the parallelized simulation step might end up with incorrect results, but we end up submitting the tx anyway. And our current approach makes it hard to signal that we should re-simulate a given tx and all the ones after it.

To allow re-simulating a tx after a revert, and to avoid doing too much unnecessary work ahead of time, we can restructure the queue to always have the next tx (or two?) simulated before sending to the mempool, and be able to retry the simulation in case of failures.
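A minimal sketch of that restructured queue, with invented names and synchronous stand-ins for the real async client calls: each tx is simulated with the nonce it is about to use, the simulation is retried once on failure, and a persistent failure flushes the rest of the queue (the open question discussed below):

```typescript
type Tx = { id: string };
// Both callbacks throw to signal a revert; they stand in for the real
// async simulate/send calls against a client.
type SimulateFn = (tx: Tx, nonce: number) => void;
type SendFn = (tx: Tx, nonce: number) => void;

function processQueue(
  queue: Tx[],
  simulateTx: SimulateFn,
  send: SendFn,
  startNonce: number,
): { sent: string[]; flushed: string[]; nonce: number } {
  let nonce = startNonce;
  const sent: string[] = [];
  for (let i = 0; i < queue.length; i++) {
    const tx = queue[i];
    try {
      simulateTx(tx, nonce); // simulate right before sending, never earlier
    } catch {
      try {
        simulateTx(tx, nonce); // retry the simulation once
      } catch {
        // Consistent reverts: flush the remaining queue without sending.
        return { sent, flushed: queue.slice(i).map((t) => t.id), nonce };
      }
    }
    send(tx, nonce);
    nonce++; // only incremented once the tx made it to the mempool
    sent.push(tx.id);
  }
  return { sent, flushed: [], nonce };
}
```

A tx whose simulation fails once but passes on retry still goes out in order with the right nonce; a tx that consistently reverts stops the queue, so `tx3` never executes.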
If a `send` call to the mempool reverts, a retry would re-simulate the tx and, if the simulation succeeds, send it again. If I recall, the `simulate` call is there to ensure we can safely increment the nonce and to help us get nicer errors from reverts (whereas a `send` doesn't give us this data), so we could also flip this around and only simulate on a revert, incrementing the nonce on success. (The above assumes we only retry a reverted `send` call once.)

There's also an open question around whether we should flush the queue if there are consistent reverts after retries, where `tx3` in the above example wouldn't execute at all.

(Edit: Nevermind, we should still simulate up front to determine if a tx will revert, because the mempool will accept a valid tx even if it's expected to revert, and then we'd 1) pay gas for the revert and 2) have to wait until the tx is mined before determining its success, negating the benefits of this queue.)