[Merged by Bors] - Limit concurrent gethash in getatxs #5442
Conversation
Codecov Report
Attention:

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           develop    #5442   +/-   ##
=========================================
+ Coverage     77.4%    77.6%    +0.1%
=========================================
  Files          265      266       +1
  Lines        30889    30955      +66
=========================================
+ Hits         23936    24025      +89
+ Misses        5432     5406      -26
- Partials      1521     1524       +3
```

☔ View full report in Codecov by Sentry.
I think it is more common to use channels in Go for this; the semaphore library is perhaps slightly more efficient. One approach would be to create a channel with capacity N, fill it with that number of tokens, and then control concurrency by blocking on a token read.
The small advantage is that this pattern is selectable, so it can easily be interrupted.
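For illustration, a minimal sketch of the channel-based token pattern described above; the `fetchAll`, `fetch`, and `limit` names are hypothetical and not code from this PR:

```go
package main

import (
	"context"
	"errors"

	"golang.org/x/sync/errgroup"
)

// fetchAll is a hypothetical helper showing the channel-as-semaphore pattern:
// a buffered channel pre-filled with `limit` tokens bounds how many fetches
// run at once. Because taking a token is a channel receive, it can be
// selected against ctx.Done() and therefore interrupted early.
func fetchAll(ctx context.Context, hashes []string, limit int,
	fetch func(context.Context, string) error,
) error {
	tokens := make(chan struct{}, limit)
	for i := 0; i < limit; i++ {
		tokens <- struct{}{}
	}

	var eg errgroup.Group
	for _, hash := range hashes {
		select {
		case <-ctx.Done():
			return errors.Join(ctx.Err(), eg.Wait())
		case <-tokens: // blocks until a slot frees up
		}
		hash := hash
		eg.Go(func() error {
			defer func() { tokens <- struct{}{} }() // return the token
			return fetch(ctx, hash)
		})
	}
	return eg.Wait()
}
```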
```diff
 	var eg errgroup.Group
 	var errs error
 	var mu sync.Mutex
-	for _, hash := range hashes {
+	for i, hash := range hashes {
+		if err := options.limiter.Acquire(ctx, 1); err != nil {
```
looks like this is interruptible as well
Yes, `Acquire` blocks until a slot is available, and `ctx` allows an early cancellation. I would also expect `Acquire` to return `ctx.Err()` as its error when the context is done.
I think the semaphore package has a nice API and itself only depends on the standard library. It would also allow weighting certain queries more heavily than others (i.e. when certain requests are more costly than others).
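As a reference point, a small sketch of how `golang.org/x/sync/semaphore` behaves on cancellation and how weights could be used; the capacity and the weights below are made up for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	// Weighted semaphore with a total capacity of 10 "units".
	sem := semaphore.NewWeighted(10)

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// Acquire blocks until the requested weight is available or the context
	// is done; on cancellation or timeout it returns ctx.Err().
	_ = sem.Acquire(ctx, 10) // takes the whole capacity immediately
	if err := sem.Acquire(ctx, 1); err != nil {
		fmt.Println("acquire interrupted:", err) // context deadline exceeded
	}
	sem.Release(10)

	// Weights make it possible to charge expensive requests more than cheap
	// ones, e.g. (hypothetically) 1 unit per small query, 5 per costly one.
	if err := sem.Acquire(context.Background(), 5); err == nil {
		// ... perform the costly request ...
		sem.Release(5)
	}
}
```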
bors merge
## Motivation

`Fetcher::GetAtxs()` might spawn tens (hundreds) of concurrent _get hash_ requests and all responses will be queued up in the ATX validator callback. There should be some backpressure to avoid querying more ATXs at one time than we can reasonably handle.

## Changes

- use a semaphore as a request limiter to limit the number of concurrent `Fetcher::getHash()` for ATX sync,
- added a _pending hash requests_ gauge metric,

## Test Plan

TODO
bors cancel
Canceled.
bors merge
Build failed:
Bors merge
Build failed:
bors merge
Build failed:
network errors in grpc streams - retrying
bors merge
Build failed:
bors merge
Pull request successfully merged into develop. Build succeeded:
## Motivation

`Fetcher::GetAtxs()` might spawn tens (hundreds) of concurrent _get hash_ requests and all responses will be queued up in the ATX validator callback. There should be some backpressure to avoid querying more ATXs at one time than we can reasonably handle.

## Changes

- use a semaphore as a request limiter to limit the number of concurrent `Fetcher::getHash()` for ATX sync,
- added a _pending hash requests_ gauge metric,
- cleaned up unused stuff in fetcher/handler.go
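For context, a rough sketch of the overall shape described above (an errgroup fan-out bounded by a weighted semaphore); the `getHashes`/`getHash` names and the limit of 20 are illustrative, not the exact code in this PR:

```go
package main

import (
	"context"
	"errors"

	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

// getHashes fans out one request per hash while a weighted semaphore caps
// how many requests are in flight at once, so responses cannot pile up
// faster than the validator callback can drain them.
func getHashes(ctx context.Context, hashes []string,
	getHash func(context.Context, string) error,
) error {
	limiter := semaphore.NewWeighted(20) // illustrative limit

	var eg errgroup.Group
	for _, hash := range hashes {
		// Blocks once the limit is reached; returns ctx.Err() if the
		// context is cancelled while waiting.
		if err := limiter.Acquire(ctx, 1); err != nil {
			return errors.Join(err, eg.Wait())
		}
		hash := hash
		eg.Go(func() error {
			defer limiter.Release(1)
			return getHash(ctx, hash)
		})
	}
	return eg.Wait()
}
```

Releasing inside the goroutine rather than in the loop keeps each slot occupied for the full lifetime of its request, which is what provides the backpressure.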