feat(common): add sendTransaction, add mempool queue to nonce manager #1717
Conversation
🦋 Changeset detected. Latest commit: 8e7d8ef. The changes in this PR will be included in the next version bump. This PR includes changesets to release 30 packages.
@@ -68,10 +70,13 @@ export function createNonceManager({
    );
  }

  const mempoolQueue = new PQueue({ concurrency: 1 });
My initial prototype of the nonce manager had this, but I later removed it because it didn't seem to help much. Once I started working on #1702 and aligned the block tag for 1) getting the initial nonce value and 2) simulating txs, though, I found it much more stable to have a queue in front of the mempool.
Once you start sending lots of parallel txs (like we do in deploys), it's important that they land in the mempool in nonce order, and you can't really guarantee this unless you queue them like this. Submitting to the mempool is fairly quick in most cases, but it is slower than what we had before. #1702 does a better job at parallelizing overall, so it should be noticeably faster to deploy, even though we're technically waiting longer when submitting to the mempool.
and yes, we could make the concurrency configurable, but I think it might be worth waiting until we have tests and can really stress test the different options to decide if it's worth allowing this to be parallelized
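For reference, a minimal sketch (not the actual MUD implementation; `queueTransaction`, `nextNonce`, and `sendRawTx` are hypothetical names) of how a p-queue with concurrency 1 serializes mempool submissions so parallel callers can't land out of nonce order:

```ts
import PQueue from "p-queue";

// A single-concurrency queue: each submission must be accepted by the RPC node
// before the next one is sent, so txs can't enter the mempool out of nonce order.
const mempoolQueue = new PQueue({ concurrency: 1 });

// `nextNonce` and `sendRawTx` stand in for the nonce manager and the actual
// RPC submission; the nonce is drawn inside the queued task, in submission order.
async function queueTransaction(
  nextNonce: () => number,
  sendRawTx: (nonce: number) => Promise<`0x${string}`>
) {
  return mempoolQueue.add(() => sendRawTx(nextNonce()), { throwOnTimeout: true });
}
```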
};

export function createNonceManager({
  client,
  address,
  blockTag = "latest",
  address, // TODO: rename to account?
didn't want to introduce a breaking change here so leaving this as a TODO
@@ -10,6 +10,7 @@ export async function getNonceManagerId({
  address: Hex;
  blockTag: BlockTag;
}): Promise<string> {
  // TODO: improve this so we don't have to call getChainId every time
Right now, this is going to make an `eth_chainId` call per use of `writeContract` and `sendTransaction`.
We can improve this by threading a chain or chain ID through from the upstream callers, but it's unclear what the "best practice" is for viem client handling of chains (should they be mutable or not) and what to do for wallets that do have "mutable" chains (e.g. a MetaMask wallet can switch networks, but it's unclear whether that should create a new client object or mutate the chain on the existing client object).
Once I get clarity, I'll follow up and optimize this to remove the extra call.
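One possible direction, sketched only under the assumption that a client's chain never changes (which is exactly the open question above); `getCachedChainId` and `chainIdCache` are hypothetical and not part of this PR:

```ts
import type { Client } from "viem";
import { getChainId } from "viem/actions";

// Hypothetical sketch: memoize the chain ID per client so getNonceManagerId
// doesn't trigger an eth_chainId request on every writeContract/sendTransaction.
const chainIdCache = new WeakMap<Client, number>();

async function getCachedChainId(client: Client): Promise<number> {
  const cached = chainIdCache.get(client);
  if (cached !== undefined) return cached;
  // Prefer the chain configured on the client; fall back to a single eth_chainId call.
  const chainId = client.chain?.id ?? (await getChainId(client));
  chainIdCache.set(client, chainId);
  return chainId;
}
```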
  client: Client<Transport, TChain, TAccount>,
  request: SendTransactionParameters<TChain, TAccount, TChainOverride>
): Promise<WriteContractReturnType> {
  const account_ = request.account ?? client.account;
My initial question was "why `account_` instead of `account`", but I see it's because we call `parseAccount` on `account_` to get `account` below. Should we call this variable `unparsedAccount` or `rawAccount` then?
Agree on clarity! I initially had this but then switched to `account_` to match what viem uses internally, but happy to go back to the clearer name.
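For illustration, a minimal sketch of the pattern being discussed; `rawAccount` is one of the hypothetical renames from the comment above, and `resolveAccount` is not a function in this PR:

```ts
import type { Account, Address } from "viem";
import { parseAccount } from "viem/accounts";

// The request can carry either an Account object or a plain address, so we
// normalize it with viem's parseAccount before use.
function resolveAccount(rawAccount: Account | Address | undefined): Account {
  if (!rawAccount) {
    throw new Error("No account provided on the request or the client.");
  }
  return parseAccount(rawAccount);
}
```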
debug("simulating tx to", request.to); | ||
await call(client, { | ||
...request, | ||
blockTag: "pending", | ||
account, | ||
} as CallParameters<TChain>); | ||
|
||
// TODO: estimate gas |
Is there more to do here than to use `estimateGas` instead of `call`?
Ahh, so I thought viem internally did both a call/simulate and a gas estimate, but looking closer at `sendTransaction` and `prepareTransactionRequest`, it looks like it's just a gas estimate (likely because we don't need the simulated return value in this context).
Will see about updating, but I may also need to add gas retry handling if there's too much time between calling this function and the tx getting sent to the mempool (due to queue and/or nonce retries).
For context: I was initially just doing the simulate, not the gas estimate, because I wanted to know if it was safe to increment the nonce but still allow the gas to be computed just before sending the tx.
But since a gas estimate costs roughly the same RPC-wise as a simulate, and we can attempt to send the tx with potentially outdated gas and save a call, this seems worth it (assuming we have proper retry handling for bad gas).
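A rough sketch of what that swap could look like; `estimatePendingGas` is a hypothetical helper with a simplified request shape, not the code that landed in this PR:

```ts
import { createPublicClient, http } from "viem";
import { estimateGas } from "viem/actions";
import { foundry } from "viem/chains";

const client = createPublicClient({ chain: foundry, transport: http() });

// Hypothetical sketch: estimate gas against the pending block (roughly the same
// RPC cost as a simulate call) and reuse the result when the tx is eventually
// sent, accepting that it may be stale by then and need retry handling.
async function estimatePendingGas(request: {
  account: `0x${string}`;
  to: `0x${string}`;
  data?: `0x${string}`;
  value?: bigint;
}): Promise<bigint> {
  return estimateGas(client, { ...request, blockTag: "pending" });
}
```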
Looks like viem does use the return data of the simulate call for better error messages in case of failures/reverts. Seems like both estimate gas and call revert with the same data, so I am now thinking there's something wrong with how errors are being parsed.
I attempted to do this, along with consolidating behavior, but ran into issues with being able to parse errors from viem. I've already burned a few hours on this, so I'd like to pause and come back to this another time in a follow-up: 97fd6d7
Sounds good. Do you think it's a viem bug or something on our side? If the former, is there a repro we could share with them?
I wasn't able to confirm whether it was a viem issue or not. Using their StackBlitz starter kit didn't seem to surface it, but it definitely happens within the MUD repo, even though everything is pinned to exactly the same version (I tried clearing all node_modules and reinstalling, and also tried bumping to a different exact version, with no change in the behavior I was seeing).
  return await nonceManager.mempoolQueue.add(
    () =>
      pRetry(
        async () => {
          if (!nonceManager.hasNonce()) {
            await nonceManager.resetNonce();
          }

          const nonce = nonceManager.nextNonce();
          debug("sending tx with nonce", nonce, "to", preparedRequest.to);
          return await viem_sendTransaction(client, { nonce, ...preparedRequest });
        },
        {
          retries: 3,
          onFailedAttempt: async (error) => {
            // On nonce errors, reset the nonce and retry
            if (nonceManager.shouldResetNonce(error)) {
              debug("got nonce error, retrying", error.message);
              await nonceManager.resetNonce();
              return;
            }
            // TODO: prepare again if there are gas errors?
            throw error;
          },
        }
      ),
    { throwOnTimeout: true }
  );
For context / my understanding: with `mempoolQueue` concurrency set to 1, we wait for the transaction to be accepted by the RPC node before attempting to send the next transaction, but we're not waiting for the transaction to be included in a block, so it's still possible for multiple transactions to be included in one block, correct?
And before, we were "preparing" to see if the node would accept the transaction, but we weren't actually waiting for it to accept it before sending another one, so in some edge case it could have happened that we sent two transactions with subsequent nonces, the first one got dropped for some reason, and the second one got stuck?
Is this correct or am I missing something in my understanding?
> with mempoolQueue concurrency set to 1 we wait for the transaction to be accepted by the rpc node before attempting to send the next transaction, but we're not waiting for the transaction to be included in a block, so it's still possible to include multiple transactions in one block, correct?

Correct.

> While before we were "preparing" to see if the node would accept the transaction, but we weren't actually waiting for it to accept it before sending another one, so in some edge case it could have happened that we're sending two transactions with subsequent nonces, the first one gets dropped for some reason, and the second one is stuck?

Correct! Or, what I think I was seeing is that when parallelizing lots of txs, the node would receive the set of txs out of nonce order, so some would just fail / not be accepted by the node.
This reverts commit 2796077c0a474758e7a11321d5d66d0f9a8e9944.
pulled out of #1702

- `sendTransaction` helper to mirror viem's `sendTransaction` but with our nonce handling
- `pending` for transaction simulation, transaction count (initializing the nonce manager), etc.