feat(common): add sendTransaction, add mempool queue to nonce manager #1717
Changeset (new file):

```md
---
"@latticexyz/common": minor
---

- Added a `sendTransaction` helper to mirror viem's `sendTransaction`, but with our nonce manager
- Added an internal mempool queue to `sendTransaction` and `writeContract` for better nonce handling
- Defaults block tag to `pending` for transaction simulation and transaction count (when initializing the nonce manager)
```
```diff
@@ -2,6 +2,7 @@ import { BaseError, BlockTag, Client, Hex, NonceTooHighError, NonceTooLowError }
 import { debug as parentDebug } from "./debug";
 import { getNonceManagerId } from "./getNonceManagerId";
 import { getTransactionCount } from "viem/actions";
+import PQueue from "p-queue";

 const debug = parentDebug.extend("createNonceManager");
```
```diff
@@ -17,12 +18,13 @@ export type CreateNonceManagerResult = {
   nextNonce: () => number;
   resetNonce: () => Promise<void>;
   shouldResetNonce: (error: unknown) => boolean;
+  mempoolQueue: PQueue;
 };

 export function createNonceManager({
   client,
-  address,
-  blockTag = "latest",
+  address, // TODO: rename to account?
+  blockTag = "pending",
   broadcastChannelName,
 }: CreateNonceManagerOptions): CreateNonceManagerResult {
   const nonceRef = { current: -1 };
```
```diff
@@ -68,10 +70,13 @@ export function createNonceManager({
     );
   }

+  const mempoolQueue = new PQueue({ concurrency: 1 });
+
   return {
     hasNonce,
     nextNonce,
     resetNonce,
     shouldResetNonce,
+    mempoolQueue,
   };
 }
```

> **Review comment:** My initial prototype of the nonce manager had this, but I later removed it because it didn't seem to help much. Once I started working on #1702 and aligned the block tag for 1) getting the initial nonce value and 2) simulating txs, I found it much more stable to have a queue in front of the mempool. Once you start sending lots of parallel txs (like we do in deploys), it's important that they land in the mempool in nonce order, and you can't really guarantee this unless you queue them like this. Submitting to the mempool is fairly quick in most cases, but slower than what we had before. #1702 does a better job at parallelizing overall, so deploys should be noticeably faster, even though we're technically waiting longer when submitting to the mempool.

> **Reply:** And yes, we could make the concurrency configurable, but I think it's worth waiting until we have tests and can really stress test the different options before deciding whether it's worth allowing this to be parallelized.
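The ordering guarantee described in the review comment can be sketched with a minimal serial queue. This `SerialQueue` is a hypothetical stand-in for `PQueue({ concurrency: 1 })`, showing that tasks submitted in parallel still complete in submission (i.e. nonce) order, even when later tasks would finish sooner on their own:

```typescript
// Minimal sketch of what PQueue({ concurrency: 1 }) provides here: a serial
// queue, so parallel callers still reach the "mempool" in nonce order.
// Hypothetical stand-in — the PR uses the real p-queue package.
class SerialQueue {
  private tail: Promise<unknown> = Promise.resolve();

  // Each task starts only after the previous one has settled.
  add<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task, task);
    this.tail = result.catch(() => undefined); // keep the chain alive on errors
    return result;
  }
}

async function demo(): Promise<number[]> {
  const queue = new SerialQueue();
  const submitted: number[] = [];
  // Three simulated "sendTransaction" calls issued in parallel with reversed
  // latencies; without the queue, nonce 2 would hit the mempool before nonce 0.
  const delays = [30, 20, 10];
  await Promise.all(
    delays.map((ms, nonce) =>
      queue.add(async () => {
        await new Promise((resolve) => setTimeout(resolve, ms));
        submitted.push(nonce); // stands in for the actual RPC submission
      })
    )
  );
  return submitted;
}
```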
```diff
@@ -10,6 +10,7 @@ export async function getNonceManagerId({
   address: Hex;
   blockTag: BlockTag;
 }): Promise<string> {
+  // TODO: improve this so we don't have to call getChainId every time
   const chainId = client.chain?.id ?? (await getChainId(client));
   return `mud:createNonceManager:${chainId}:${getAddress(address)}:${blockTag}`;
 }
```

> **Review comment:** Right now, this is going to make an extra `getChainId` call each time. We can improve this by threading through a chain or chain ID from the upstream callers, but it's unclear what the "best practice" is in terms of viem client handling of chains (should they be mutable or not), and what to do for wallets that do have "mutable" chains (e.g. a MetaMask wallet can switch networks, but it's unclear whether that should create a new client object or mutate the chain on the existing client object). Once I get clarity, I'll follow up and optimize this to remove the extra call.
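One shape the follow-up optimization could take is caching the chain ID per client object, so the extra RPC call happens at most once. This is a hypothetical sketch (the `MinimalClient` type and `getCachedChainId` name are not viem's API), and it assumes the client's chain is effectively immutable — which is exactly the open question about "mutable" wallet chains in the comment above:

```typescript
// Hypothetical sketch: memoize the chain ID per client so getChainId is only
// called once. Assumes an immutable chain per client object.
type MinimalClient = { chain?: { id: number }; getChainId: () => Promise<number> };

const chainIdCache = new WeakMap<MinimalClient, Promise<number>>();

function getCachedChainId(client: MinimalClient): Promise<number> {
  // Prefer a statically configured chain, mirroring `client.chain?.id ?? ...`.
  if (client.chain) return Promise.resolve(client.chain.id);
  let cached = chainIdCache.get(client);
  if (!cached) {
    cached = client.getChainId(); // a single RPC round trip per client
    chainIdCache.set(client, cached);
  }
  return cached;
}
```

Caching the promise (rather than the resolved value) also deduplicates concurrent lookups made before the first call resolves.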
New file (`sendTransaction`), `@@ -0,0 +1,88 @@`:

```ts
import {
  Account,
  CallParameters,
  Chain,
  Client,
  SendTransactionParameters,
  Transport,
  WriteContractReturnType,
} from "viem";
import { call, sendTransaction as viem_sendTransaction } from "viem/actions";
import pRetry from "p-retry";
import { debug as parentDebug } from "./debug";
import { getNonceManager } from "./getNonceManager";
import { parseAccount } from "viem/accounts";

const debug = parentDebug.extend("sendTransaction");

// TODO: migrate away from this approach once we can hook into viem's nonce management: https://github.com/wagmi-dev/viem/discussions/1230
```
```ts
export async function sendTransaction<
  TChain extends Chain | undefined,
  TAccount extends Account | undefined,
  TChainOverride extends Chain | undefined
>(
  client: Client<Transport, TChain, TAccount>,
  request: SendTransactionParameters<TChain, TAccount, TChainOverride>
): Promise<WriteContractReturnType> {
  const account_ = request.account ?? client.account;
  if (!account_) {
    // TODO: replace with viem AccountNotFoundError once it's exported
    throw new Error("No account provided");
  }
  const account = parseAccount(account_);

  const nonceManager = await getNonceManager({
    client,
    address: account.address,
  });
```

> **Review comment:** My initial question was "why …

> **Reply:** Agree on clarity! I initially had this but then switched to …
```ts
  async function prepare(): Promise<SendTransactionParameters<TChain, TAccount, TChainOverride>> {
    if (request.gas) {
      debug("gas provided, skipping simulate", request.to);
      return request;
    }

    debug("simulating tx to", request.to);
    await call(client, {
      ...request,
      blockTag: "pending",
      account,
    } as CallParameters<TChain>);

    // TODO: estimate gas

    return request;
  }

  const preparedRequest = await prepare();
```

Comments on lines +47 to +54:

> **Review comment:** Is there more to do here than to use …

> **Reply:** Ahh, so I thought viem internally did both a call/simulate + estimate gas, but looking closer at … Will see about updating, but may also need to add gas retry handling if there's too much time between calling this function and the tx getting sent to the mempool (due to the queue and/or nonce retries).

> **Reply:** For context: I was initially just doing the simulate, not the gas estimate, because I wanted to know whether it was safe to increment the nonce while still allowing the gas to be computed just before sending the tx. But since a gas estimate is ~the same RPC cost as a simulate, and we can attempt to send the tx with potentially outdated gas and save a call, this seems worth it (assuming we have proper retry handling for bad gas).

> **Review comment:** I attempted to do this, along with consolidating behavior, but ran into issues parsing errors from viem. I've already burned a few hours on this, so I'd like to pause and come back to it another time in a follow-up: 97fd6d7

> **Reply:** Sounds good. Do you think it's a viem bug or something on our side? If the former, is there a repro we could share with them?

> **Reply:** I wasn't able to confirm whether it was a viem issue or not. Using their StackBlitz starter kit didn't seem to surface it, but it definitely happens within the MUD repo, even though everything is pinned to exactly the same version (I tried clearing all node modules and reinstalling, and also tried bumping to a different exact version, with no change in the behavior I was seeing).
```ts
  return await nonceManager.mempoolQueue.add(
    () =>
      pRetry(
        async () => {
          if (!nonceManager.hasNonce()) {
            await nonceManager.resetNonce();
          }

          const nonce = nonceManager.nextNonce();
          debug("sending tx with nonce", nonce, "to", preparedRequest.to);
          return await viem_sendTransaction(client, { nonce, ...preparedRequest });
        },
        {
          retries: 3,
          onFailedAttempt: async (error) => {
            // On nonce errors, reset the nonce and retry
            if (nonceManager.shouldResetNonce(error)) {
              debug("got nonce error, retrying", error.message);
              await nonceManager.resetNonce();
              return;
            }
            // TODO: prepare again if there are gas errors?
            throw error;
          },
        }
      ),
    { throwOnTimeout: true }
  );
}
```

Comments on lines +61 to +88:

> **Review comment:** For context / my understanding: with … Is this correct, or am I missing something in my understanding?

> **Reply:** Correct.

> **Reply:** Correct! Or, what I think I was seeing is that when parallelizing lots of txs, the node would receive the set of txs out of nonce order, so some would just fail / not be accepted by the node.
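The retry semantics discussed above — reset and retry on nonce errors, rethrow everything else — can be sketched without `p-retry` as a plain loop. All names here (`sendWithNonceRetries`, the nonce-manager shape) are hypothetical stand-ins for the real code:

```typescript
// Minimal sketch of the retry policy: on a nonce error, re-sync the nonce
// from the node and retry; any other error (or exhausted retries) surfaces.
async function sendWithNonceRetries<T>(
  send: (nonce: number) => Promise<T>,
  nonceManager: {
    nextNonce: () => number;
    resetNonce: () => Promise<void>;
    shouldResetNonce: (error: unknown) => boolean;
  },
  retries = 3
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await send(nonceManager.nextNonce());
    } catch (error) {
      if (attempt >= retries || !nonceManager.shouldResetNonce(error)) {
        throw error; // not a nonce error, or out of attempts: surface it
      }
      await nonceManager.resetNonce(); // re-sync nonce from the node, then retry
    }
  }
}
```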
> **Review comment:** Didn't want to introduce a breaking change here, so leaving this as a TODO.