
feat(common): add sendTransaction, add mempool queue to nonce manager #1717

Merged
merged 8 commits into main from holic/nonce-mempool-queue on Oct 9, 2023

Conversation

holic
Member

@holic holic commented Oct 9, 2023

pulled out of #1702

  • Adds a sendTransaction helper that mirrors viem's sendTransaction but with our nonce handling (see the usage sketch below)
  • Adds a mempool queue for better nonce handling
  • Defaults to the pending block tag for transaction simulation, transaction count (when initializing the nonce manager), etc.
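
For context, a rough usage sketch of the new helper from a caller's perspective (the import path, chain, key, and transaction values here are illustrative assumptions, not taken from this PR):

```ts
import { createWalletClient, http, parseEther } from "viem";
import { foundry } from "viem/chains";
import { privateKeyToAccount } from "viem/accounts";
// Import path is an assumption for illustration; check the package's exports.
import { sendTransaction } from "@latticexyz/common";

const client = createWalletClient({
  chain: foundry,
  transport: http(),
  // well-known anvil/foundry test key, for illustration only
  account: privateKeyToAccount("0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"),
});

// Same parameters as viem's sendTransaction, but the nonce is assigned by the
// shared nonce manager and the submission goes through the mempool queue.
const hash = await sendTransaction(client, {
  to: "0x0000000000000000000000000000000000000000",
  value: parseEther("0.1"),
});
```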

@changeset-bot

changeset-bot bot commented Oct 9, 2023

🦋 Changeset detected

Latest commit: 8e7d8ef

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 30 packages
Name Type
@latticexyz/common Major
@latticexyz/block-logs-stream Major
@latticexyz/cli Major
@latticexyz/config Major
@latticexyz/dev-tools Major
@latticexyz/protocol-parser Major
@latticexyz/store-indexer Major
@latticexyz/store-sync Major
@latticexyz/store Major
@latticexyz/world-modules Major
@latticexyz/world Major
@latticexyz/react Major
@latticexyz/abi-ts Major
create-mud Major
@latticexyz/ecs-browser Major
@latticexyz/faucet Major
@latticexyz/gas-report Major
@latticexyz/network Major
@latticexyz/noise Major
@latticexyz/phaserx Major
@latticexyz/recs Major
@latticexyz/schema-type Major
@latticexyz/services Major
@latticexyz/solecs Major
solhint-config-mud Major
solhint-plugin-mud Major
@latticexyz/std-client Major
@latticexyz/std-contracts Major
@latticexyz/store-cache Major
@latticexyz/utils Major


@@ -68,10 +70,13 @@ export function createNonceManager({
);
}

const mempoolQueue = new PQueue({ concurrency: 1 });
Member Author
My initial prototype of the nonce manager had this, but I later removed it because it didn't seem to help much. Once I started working on #1702 and aligned the block tag for 1) getting the initial nonce value and 2) simulating txs, I found it much more stable to have a queue in front of the mempool.

Once you start sending lots of parallel txs (like we do in deploys), it's important that they land in the mempool in nonce order, and you can't really guarantee that unless you queue them like this. Submitting to the mempool is fairly quick in most cases, but still slower than what we had before. #1702 does a better job of parallelizing overall, so deploys should be noticeably faster, even though we're technically waiting longer when submitting to the mempool.

Member Author

and yes, we could make the concurrency configurable, but I think it's worth waiting until we have tests and can really stress-test the different options before deciding whether it's worth allowing this to be parallelized
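
For reference, a minimal sketch of the queue idea being discussed (not the PR's exact code; the wrapper name is made up):

```ts
import PQueue from "p-queue";

// A concurrency-1 queue in front of the mempool: parallel callers submit their
// txs one at a time, so they reach the node in nonce order.
const mempoolQueue = new PQueue({ concurrency: 1 });

async function queuedSend<T>(sendToMempool: () => Promise<T>): Promise<T> {
  // Each caller waits only for earlier txs to be accepted by the RPC node, not
  // for them to be mined, so several queued txs can still land in one block.
  return mempoolQueue.add(sendToMempool, { throwOnTimeout: true });
}
```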

};

export function createNonceManager({
  client,
-  address,
-  blockTag = "latest",
+  address, // TODO: rename to account?
Member Author
didn't want to introduce a breaking change here so leaving this as a TODO

@@ -10,6 +10,7 @@ export async function getNonceManagerId({
  address: Hex;
  blockTag: BlockTag;
}): Promise<string> {
  // TODO: improve this so we don't have to call getChainId every time
Member Author
right now, this is going to make an eth_chainId call per use of writeContract and sendTransaction

we can improve this by threading a chain or chain ID through from the upstream callers, but it's unclear what the "best practice" is for viem client handling of chains (should they be mutable or not?) and what to do for wallets that do have "mutable" chains (e.g. a MetaMask wallet can switch networks, but it's unclear whether that should create a new client object or mutate the chain on the existing client object)

once I get clarity, I'll follow up and optimize this to remove the extra call
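
One possible shape of that follow-up, sketched here with caching rather than threading the chain through (the helper name and cache are made up for illustration):

```ts
import type { Client } from "viem";
import { getChainId } from "viem/actions";

// Prefer the chain already configured on the client; otherwise make the
// eth_chainId call once per client and remember the result.
const chainIdCache = new WeakMap<Client, number>();

async function resolveChainId(client: Client): Promise<number> {
  if (client.chain) return client.chain.id;
  const cached = chainIdCache.get(client);
  if (cached != null) return cached;
  const chainId = await getChainId(client); // eth_chainId
  chainIdCache.set(client, chainId);
  return chainId;
}
```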

@holic holic marked this pull request as ready for review October 9, 2023 10:37
@holic holic requested a review from alvrs as a code owner October 9, 2023 10:37
alvrs previously approved these changes Oct 9, 2023
  client: Client<Transport, TChain, TAccount>,
  request: SendTransactionParameters<TChain, TAccount, TChainOverride>
): Promise<WriteContractReturnType> {
  const account_ = request.account ?? client.account;
Member
my initial question was "why account_ instead of account?" but I see it's because we call parseAccount on account_ to get account below. Should we call this variable unparsedAccount or rawAccount then?

Member Author
agree on clarity! I initially had the clearer name but switched to account_ to match what viem uses internally; happy to go back to the clearer name
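
For clarity, a small sketch of the rename being discussed (variable and function names here are illustrative, not the PR's final code):

```ts
import type { Account, Address } from "viem";
import { parseAccount } from "viem/accounts";

function resolveAccount(rawAccount: Address | Account | undefined): Account {
  if (!rawAccount) throw new Error("No account provided");
  // parseAccount turns a plain address into a JSON-RPC account object and
  // passes full account objects through unchanged.
  return parseAccount(rawAccount);
}
```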

Comment on lines +46 to +53
debug("simulating tx to", request.to);
await call(client, {
...request,
blockTag: "pending",
account,
} as CallParameters<TChain>);

// TODO: estimate gas
Member
is there more to do here than to use estimateGas instead of call?

Member Author
ahh so I thought viem internally did both a call/simulate + estimate gas, but looking closer at sendTransaction and prepareTransactionRequest, it looks like it's just a gas estimate (likely because we don't need the simulated return value in this context)

will see about updating, but I may also need to add gas retry handling in case there's too much time between calling this function and the tx getting sent to the mempool (due to queue and/or nonce retries)
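
A sketch of the swap being discussed, as a standalone example (the client setup, addresses, and values are illustrative; the PR's actual code would pass through its own request and account):

```ts
import { createPublicClient, http, parseEther } from "viem";
import { foundry } from "viem/chains";
import { estimateGas } from "viem/actions";

const client = createPublicClient({ chain: foundry, transport: http() });

// A pending-block gas estimate doubles as the "would this tx revert?" check,
// so the separate eth_call can be dropped and the result reused as the gas.
const gas = await estimateGas(client, {
  account: "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", // anvil test account
  to: "0x0000000000000000000000000000000000000000",
  value: parseEther("0.01"),
  blockTag: "pending",
});
```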

Member Author
@holic holic Oct 9, 2023
for context: I was initially just doing the simulate, not the gas estimate, because I wanted to know if it was safe to increment the nonce but still allow for the gas to be computed just before sending the tx

but since gas estimate is ~the same RPC cost as simulate and we can attempt to send the tx with potentially outdated gas and save a call, this seems worth it (assuming we have proper retry handling for bad gas)

Member Author
@holic holic Oct 9, 2023
looks like viem does use the return data of the simulate call for better error messages in case of failures/reverts. It seems like both estimateGas and call revert with the same data, though, so I am now thinking there's something wrong with how errors are being parsed

Member Author
I attempted to do this, along with consolidating behavior, but ran into issues with being able to parse errors from viem. I've already burned a few hours on this, so I'd like to pause and come back to this another time in a follow-up: 97fd6d7

Member
sounds good. Do you think it's a viem bug or something on our side? If the former, is there a repro we could share with them?

Member Author
I wasn't able to confirm whether it was a viem issue or not. Using their StackBlitz starter kit didn't seem to surface it, but it definitely happens within the MUD repo, even though everything is pinned to exactly the same version (I tried clearing all node modules and reinstalling, and also tried bumping to a different exact version, with no change in the behavior I was seeing).

Comment on lines +60 to +87
return await nonceManager.mempoolQueue.add(
  () =>
    pRetry(
      async () => {
        if (!nonceManager.hasNonce()) {
          await nonceManager.resetNonce();
        }

        const nonce = nonceManager.nextNonce();
        debug("sending tx with nonce", nonce, "to", preparedRequest.to);
        return await viem_sendTransaction(client, { nonce, ...preparedRequest });
      },
      {
        retries: 3,
        onFailedAttempt: async (error) => {
          // On nonce errors, reset the nonce and retry
          if (nonceManager.shouldResetNonce(error)) {
            debug("got nonce error, retrying", error.message);
            await nonceManager.resetNonce();
            return;
          }
          // TODO: prepare again if there are gas errors?
          throw error;
        },
      }
    ),
  { throwOnTimeout: true }
);
Member
for context / my understanding: with mempoolQueue concurrency set to 1 we wait for the transaction to be accepted by the rpc node before attempting to send the next transaction, but we're not waiting for the transaction to be included in a block, so it's still possible to include multiple transactions in one block, correct? While before we were "preparing" to see if the node would accept the transaction, but we weren't actually waiting for it to accept it before sending another one, so in some edge case it could have happened that we're sending two transactions with subsequent nonces, the first one gets dropped for some reason, and the second one is stuck?

Is this correct or am I missing something in my understanding?

Member Author
> with mempoolQueue concurrency set to 1 we wait for the transaction to be accepted by the rpc node before attempting to send the next transaction, but we're not waiting for the transaction to be included in a block, so it's still possible to include multiple transactions in one block, correct?

correct

> While before we were "preparing" to see if the node would accept the transaction, but we weren't actually waiting for it to accept it before sending another one, so in some edge case it could have happened that we're sending two transactions with subsequent nonces, the first one gets dropped for some reason, and the second one is stuck?

correct! or, what I think I was seeing is that when parallelizing lots of txs, the node would receive the set of txs out of nonce order, so some would just fail/not be accepted by the node

@holic holic merged commit 0660561 into main Oct 9, 2023
@holic holic deleted the holic/nonce-mempool-queue branch October 9, 2023 14:57