eth_chainId is called every time the provider is used #901
You are correct; this is a safety feature for v5. In v4, if a backend changed, the results were simply undefined (which could result in hanging the UI, or returning wrong or inconsistent data), so v5 added the ability (to match EIP-1193) to support a changing backend, which requires verifying on each call that the backend hasn't changed. An upcoming multi-call provider will mitigate much of this, since the multi-call contract will execute these operations in a single call.

For most uses it is not a huge overhead, since the default provider uses sources that intrinsically know their chain ID (and hence do not make external calls), and the next largest group of users are on MetaMask (or similar clients), for which the chainId is free and does not hit the network. But you are absolutely correct that there are cases where you would like to discard safety (since you know the network won't change) and this just adds overhead.

The InfuraProvider and ilk do not need to do this (since the network is intrinsic and dictates the URL to hit), so I will probably add a provider similar to:

```typescript
class StaticJsonRpcProvider extends JsonRpcProvider {
  async getNetwork(): Promise<Network> {
    if (this._network) { return Promise.resolve(this._network); }
    return super.getNetwork();
  }
}
```
The `getNetwork` override above returns the cached network once it has been detected, so it skips the repeated `eth_chainId` lookups.

However, for your case, I think there is a better solution:

```typescript
provider.on("block", (blockNumber) => {
  contract.constantMethod();
  // ...
});
```

Since the provider already polls for new blocks internally, the `block` event lets you run your constant calls only when a new block actually arrives, without issuing extra `eth_blockNumber` requests yourself. Also, if you are using INFURA, make sure you are using the `InfuraProvider`, which knows its network intrinsically.

Make sense? Other suggestions?
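For illustration, here is a minimal sketch that puts the two suggestions above together: the network-caching subclass and the `block` listener. It assumes ethers v5; the RPC URL, contract address, and ABI are placeholders, and this is not the exact code that later shipped in the library.

```typescript
import { ethers } from "ethers";

// Placeholder endpoint and contract details; substitute your own.
const RPC_URL = "https://mainnet.infura.io/v3/<project-id>";
const CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000";
const ABI = ["function constantMethod() view returns (uint256)"];

// Same idea as the subclass sketched above: once the network is known,
// reuse it instead of re-detecting it (and issuing eth_chainId) per call.
class StaticJsonRpcProvider extends ethers.providers.JsonRpcProvider {
  async getNetwork(): Promise<ethers.providers.Network> {
    if (this._network) { return this._network; }
    return super.getNetwork();
  }
}

const provider = new StaticJsonRpcProvider(RPC_URL);
const contract = new ethers.Contract(CONTRACT_ADDRESS, ABI, provider);

// The provider polls for new blocks internally, so listening for "block"
// avoids issuing separate getBlockNumber() calls from application code.
provider.on("block", async (blockNumber: number) => {
  const value = await contract.constantMethod();
  console.log(blockNumber, value.toString());
});
```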
I am temporarily using Infura for development. The application expects an RPC URL of an ETH node from the user, so the user should be able to run their own node or choose Infura, QuickNode, or any other provider. Also, if the application is shut down for a few hours and then restarted, it would start from the current block, but the logic needs it to start from where it left off previously. That's why I cannot rely on the `block` event alone. I think the `StaticJsonRpcProvider` approach would work well for this.
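To make the resume-on-restart requirement concrete, here is a rough sketch of the kind of loop being described, assuming ethers v5. The checkpoint file name, endpoint, and helper names are invented for illustration; they are not part of the original application.

```typescript
import { promises as fs } from "fs";
import { ethers } from "ethers";

// Hypothetical endpoint and checkpoint file, purely for illustration.
const RPC_URL = "https://mainnet.infura.io/v3/<project-id>";
const CHECKPOINT_FILE = "./last-processed-block.txt";

const provider = new ethers.providers.JsonRpcProvider(RPC_URL);

async function readCheckpoint(): Promise<number | undefined> {
  try {
    return parseInt(await fs.readFile(CHECKPOINT_FILE, "utf8"), 10);
  } catch {
    return undefined; // first run: no checkpoint saved yet
  }
}

async function run(): Promise<void> {
  // Resume from the saved block rather than the current head, so blocks
  // that arrived while the process was down are not skipped.
  let next = (await readCheckpoint()) ?? (await provider.getBlockNumber());

  for (;;) {
    const head = await provider.getBlockNumber();
    while (next <= head) {
      const block = await provider.send("eth_getBlockByNumber", [
        ethers.utils.hexValue(next),
        false,
      ]);
      // ... inspect block.transactionsRoot / block.receiptsRoot here ...
      await fs.writeFile(CHECKPOINT_FILE, String(next));
      next++;
    }
    // Wait before polling for the next head.
    await new Promise((resolve) => setTimeout(resolve, 15_000));
  }
}

run();
```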
There is now a StaticJsonRpcProvider in 5.0.3. Try it out and let me know if you have any issues.
@EvilJordan There isn't really a good way around getBlockNumber. The default poll interval is 4s, but you can make it longer to reduce the number of calls. I'm contemplating using a more friendly round-robin method in the default provider, since we only need an advisory hint at new blocks, so we could relax the quorum for it. This shouldn't have changed from v4 though, which did the same thing. :)
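For reference, here is a small sketch of how the poll interval can be lengthened on a provider instance, assuming ethers v5 (the endpoint is a placeholder):

```typescript
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider(
  "https://mainnet.infura.io/v3/<project-id>" // placeholder endpoint
);

// The default is 4000 ms; a longer interval means fewer eth_blockNumber
// polls, at the cost of noticing new blocks slightly later.
provider.pollingInterval = 10000;
```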
Thanks for adding it to the library. I changed my code to use the new `StaticJsonRpcProvider`, but it doesn't seem to work for me. Can you try to reproduce this?

```typescript
import { ethers } from 'ethers';

const provider = new ethers.providers.StaticJsonRpcProvider(
  'https://mainnet.infura.io/v3/84842078b09946638c03157f83405213'
);

(async () => {
  console.log('hey');
  const result = await provider.getBlockNumber();
  console.log('yeh');
  console.log(result);
})();
```

The process exits immediately after printing `hey`, without ever reaching `yeh`. However, when I used the class definition you gave earlier instead of importing it, it works as expected. Looks like some issue with the import?

```typescript
import { ethers } from 'ethers';

class StaticJsonRpcProvider extends ethers.providers.JsonRpcProvider {
  async getNetwork(): Promise<ethers.providers.Network> {
    if (this._network) {
      return Promise.resolve(this._network);
    }
    return super.getNetwork();
  }
}

const provider = new StaticJsonRpcProvider(
  'https://mainnet.infura.io/v3/84842078b09946638c03157f83405213'
);

(async () => {
  console.log('hey');
  const result = await provider.getBlockNumber();
  console.log('yeh');
  console.log(result);
})();
```
You're right. I don't know what I broke, but something. I'll look into this immediately.
This should be fixed in 5.0.4. Try it out and let me know. :)
Works super!!
This change was due to eth_chainId checks in JsonRpcProvider on every call. Discussion thread: ethers-io/ethers.js#901
Oh. That’s odd. Maybe I didn’t add the needed logic to the WebSocketProvider to short-circuit that check. I’ll look into that next week because that is definitely unnecessary. I think caching the chainId there should be safe.
Opened #1054 to track caching chainId.
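As a rough illustration of the caching being tracked, the same trick used by StaticJsonRpcProvider could be applied to a WebSocketProvider subclass. This is only a sketch under that assumption, not the change that eventually landed for #1054; the endpoint is a placeholder.

```typescript
import { ethers } from "ethers";

// Illustrative only: reuse the detected network for the lifetime of the
// socket, since a given WebSocket connection stays on one chain.
class CachedNetworkWebSocketProvider extends ethers.providers.WebSocketProvider {
  async getNetwork(): Promise<ethers.providers.Network> {
    if (this._network) { return this._network; }
    return super.getNetwork();
  }
}

const provider = new CachedNetworkWebSocketProvider(
  "wss://mainnet.infura.io/ws/v3/<project-id>" // placeholder endpoint
);
```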
Since ethers v5, all RPC requests have been preceded by an eth_chainId lookup. This is described by ethers as a safety feature to mitigate being "RPC rugged" - i.e. where a wallet silently changes RPC without notifying the provider.

Inspecting the Across logs, eth_chainId is the second-most popular RPC call, after only eth_getLogs. Over the past 30 days, Infura usage has been:

eth_getLogs: 264M
eth_chainId: 30M
eth_call: 24M
eth_getTransactionReceipt: 11M
eth_blockNumber: 3M
Total: 336M

Given that the Across bots maintain a 1:1 relationship between provider instance and back-end RPC provider, there's no obvious way for the chainId to ever change. The StaticJsonRpcProvider is therefore provided by ethers for this scenario.

It should be noted that the 30-day figures might be lower than normal due to downtime for both Arbitrum Nitro and the merge. In any case, migrating to StaticJsonRpcProvider would reduce total requests by about 9% based on these figures, and would predominantly help reduce latency in the bots.

See also: ethers-io/ethers.js#901
Ref: ACX-67
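As a sketch of the migration being described (the endpoint is a placeholder and this assumes ethers v5), passing an explicit network to StaticJsonRpcProvider should avoid even the initial detection round-trip, since the provider then has no reason to ask the node for its chain ID:

```typescript
import { ethers } from "ethers";

// With the network stated up front, no eth_chainId lookup is needed
// before each request.
const provider = new ethers.providers.StaticJsonRpcProvider(
  "https://mainnet.infura.io/v3/<project-id>", // placeholder endpoint
  { name: "mainnet", chainId: 1 }
);

async function main(): Promise<void> {
  console.log(await provider.getBlockNumber());
}

main();
```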
Reference issue - ethers-io/ethers.js#901. Cuts our RPC request cost approximately in half by eliminating a sequential call to get `chainId` from every(!) contract call we make.
Hi, I am seeing a lot of unexpected eth_chainId calls.

My application is a node server calling `provider.getBlockNumber()` and `contract.constantMethod()` repeatedly. Based on the response, it decides whether to build a transaction by calling `provider.send('eth_getBlockByNumber')` for a range of blocks (I need the transaction and receipt roots of these blocks, which justifies the 5725 calls) and then sends a transaction (which justifies the 46 calls to `eth_gasPrice`). The application has to perform this routine 24x7.

If the number of `eth_blockNumber` requests and `eth_call` requests is added up, it approximately equals the number of requests to `eth_chainId`. It looks like every `eth_blockNumber` and `eth_call` requests `eth_chainId` before the actual call, while custom methods using `.send` don't do that.

At first it seemed to me that this could be some sort of safety check to prevent errors due to a changing backend. Most developers might not be concerned about these calls for a new dapp, but as usage grows it would double their costs, and that would become an issue. Can something be done to reduce the number of requests?
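To make the distinction in the issue concrete, here is a small sketch (placeholder endpoint, ethers v5 assumed) of the two kinds of calls being compared: a helper method, which goes through the provider's network check, and a raw `.send`, which does not:

```typescript
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider(
  "https://mainnet.infura.io/v3/<project-id>" // placeholder endpoint
);

async function main(): Promise<void> {
  // Helper method: the provider verifies the network first, which is where
  // the extra eth_chainId request comes from.
  const latest = await provider.getBlockNumber();

  // Raw JSON-RPC call: sent as-is, with no preceding eth_chainId.
  const block = await provider.send("eth_getBlockByNumber", [
    ethers.utils.hexValue(latest),
    false,
  ]);
  console.log(block.transactionsRoot, block.receiptsRoot);
}

main();
```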