Add Throttling Support #139
Initial throttling support added in v5.0.6. The throttling is currently only available on the server-side. It may still make sense to support client-side throttling, but for the time being I'm going to close this issue. If anyone has a need for client-side throttling, please re-open. I worry that this may cause large backlogs though, which might result in apps becoming unable to keep up...
@ricmoo what is the difference between server-side and client-side throttling in this feature? I assume the same ethers package from npm, if run on Node.js, would be server-side, and if bundled with something like webpack, client-side. So what makes this feature only work on Node.js? Or am I getting the context wrong?
Server-side means the server can complain that it is too busy and send back a response (a 429 status, or a custom throttle error thrown during processFunc) to initiate throttling. The developer does not need to do anything; the library will automatically throttle requests based on the server responses. Client-side would allow the developer to specify a maximum request rate, which the library would enforce by stalling the next request until that duration has elapsed since the last request. I’ve used this technique in an iOS wallet though, and it led to “bunching”: basically all requests get queued up, and if requests come in faster than the rate duration, the queue grows indefinitely, so memory pressure issues and stale data ensue... So it doesn’t have anything to do with node vs browser, etc. :) Make sense?
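To make the “bunching” failure mode concrete, here is a minimal sketch (not ethers code; the class and its names are purely illustrative) of naive client-side throttling, where the backlog grows without bound whenever callers submit faster than the allowed rate:

```ts
// Naive client-side throttling: at most one task is dispatched per minDelayMs.
// If tasks arrive faster than that rate, the queue ("backlog") grows unboundedly.
type Task<T> = () => Promise<T>;

class NaiveThrottle {
  private queue: Array<() => void> = [];
  private timer: ReturnType<typeof setInterval>;

  constructor(minDelayMs: number) {
    // Drain exactly one queued task per interval
    this.timer = setInterval(() => {
      const next = this.queue.shift();
      if (next) { next(); }
    }, minDelayMs);
  }

  add<T>(task: Task<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      // The backlog grows here whenever tasks are added faster than they drain
      this.queue.push(() => { task().then(resolve, reject); });
    });
  }

  get backlog(): number { return this.queue.length; }

  stop(): void { clearInterval(this.timer); }
}
```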
(maybe it’s easier to think of it this way: from the point-of-view of the library, the Provider is always a client, regardless of whether it is being used in a server or an app)
Oh, I get this, thanks for explaining! So I think it's great that server-side throttling is added, as it would make it possible to go all out. I'm not sure why client-side throttling would be needed, since the client's request rate cannot usefully exceed the server's limit, and if it did, the server-side would return errors anyway. Edit: I just realized that client-side throttling might be needed when the server-side rate limit has a big interval, e.g. blockcypher for bitcoin APIs allows 200 requests per hour, so one could apply client-side throttling at a smaller interval (1 request per 20 sec) in such an application. However, I don't think any ethereum provider is as devilish as blockcypher for bitcoin in terms of server-side throttling.
At the time I thought of this issue, I didn't know (and possibly they didn't?) that INFURA and Etherscan provided meaningful server-side throttle errors, but their (at the time, ignored) nominal rate limits were published. The plan was to bake these into the various providers. But I agree, server-side is so much nicer from a developer point-of-view, and allows the server to keep some additional control. Alchemy uses the Retry-After header and I'll be bugging INFURA about adding it too. The ethers library will honour it if present, and otherwise falls back onto normal exponential back-off. :)
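For illustration, here is a conceptual sketch of that retry behaviour (not the actual ethers implementation): honour a Retry-After header on a 429 when the server sends one, otherwise fall back to exponential back-off with a little jitter. The function name and the delay constants are made up for the example:

```ts
// Conceptual retry loop: honour Retry-After on a 429, else exponential back-off.
async function fetchWithBackoff(url: string, maxAttempts = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url);
    if (response.status !== 429) { return response; }

    const retryAfter = response.headers.get("Retry-After");
    const delayMs = retryAfter
      ? parseFloat(retryAfter) * 1000                 // server-provided hint (in seconds)
      : 250 * (1 << attempt) + Math.random() * 250;   // exponential back-off with jitter
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("throttled: too many retries");
}
```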
FYI, I'm seeing 429 errors in my app but I'm never hitting …
A 429 simply guides the exponential back-off logic (unless it includes the retry fields, in which case those are honoured).
Much thanks, friend. Do you have any built-in handling where I can tap in and prevent retries? I'm actually not positive that will be best practice, but I'm curious if it's possible.
There is no ability to do that in v5, but I will be adding more flexibility to the Connection object in v6. But it is probably a bad idea to prevent retries entirely. Exponential back-off is your friend. :) Why do you want to stop them?
I'm not sure that I want to stop them entirely. It was just something we were thinking about & wanting to experiment with. I see that you have a utility for … In the meantime I'm looking at setting … Really appreciate your feedback. Thank you.
If you use the FallbackProvider, it will already do much of that for you. You can specify a longer stallTimeout before it attempts the next provider in the chain, and each provider can be given more or less weight. The default polling interval is 4000ms; only set it lower than that if you are connecting to a local Geth node, otherwise you will definitely trigger throttles from the backends. :)
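For reference, a sketch of that kind of configuration with the ethers v5 FallbackProvider; the API keys and the exact numbers are placeholders:

```ts
import { ethers } from "ethers";

// Two backends with placeholder credentials
const infura = new ethers.providers.InfuraProvider("homestead", "YOUR_INFURA_PROJECT_ID");
const etherscan = new ethers.providers.EtherscanProvider("homestead", "YOUR_ETHERSCAN_API_KEY");

const provider = new ethers.providers.FallbackProvider([
  // stallTimeout: how long to wait on this provider before also trying the next one;
  // weight: how much this provider counts toward the quorum
  { provider: infura,    priority: 1, weight: 2, stallTimeout: 4000 },
  { provider: etherscan, priority: 2, weight: 1, stallTimeout: 4000 },
], 1); // quorum of 1

// Keep the default 4s polling interval unless talking to a local node
provider.pollingInterval = 4000;
```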
To prevent being soft-banned by INFURA or Etherscan, or to prevent DoS-ing a node, it would be useful to have throttling available for providers, for example (a rough sketch of how these knobs might work follows the list below):
provider.maximumRequestsPerMinute
provider.maximumConcurrentRequests
provider.maximumBacklog
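Purely hypothetical sketch: none of these properties exist in ethers, and the wrapper below only enforces concurrency and backlog limits around a v5 JsonRpcProvider's send(); maximumRequestsPerMinute would additionally need a timer, which is omitted here.

```ts
import { ethers } from "ethers";

// Hypothetical: these option names mirror the proposed properties above.
interface ThrottleOptions {
  maximumConcurrentRequests: number;
  maximumBacklog: number;
}

function throttleProvider(provider: ethers.providers.JsonRpcProvider, options: ThrottleOptions) {
  let inFlight = 0;
  const backlog: Array<() => void> = [];
  const originalSend = provider.send.bind(provider);

  // Dispatch the next queued request if a concurrency slot is free
  const runNext = () => {
    if (inFlight >= options.maximumConcurrentRequests) { return; }
    const next = backlog.shift();
    if (next) { next(); }
  };

  provider.send = (method: string, params: Array<any>): Promise<any> => {
    if (backlog.length >= options.maximumBacklog) {
      return Promise.reject(new Error("throttle backlog exceeded"));
    }
    return new Promise((resolve, reject) => {
      backlog.push(() => {
        inFlight++;
        originalSend(method, params)
          .then(resolve, reject)
          .finally(() => { inFlight--; runNext(); });
      });
      runNext();
    });
  };

  return provider;
}
```

Usage would then look something like `throttleProvider(new ethers.providers.JsonRpcProvider(), { maximumConcurrentRequests: 5, maximumBacklog: 100 })`.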