Fetch non-cached reports in parallel #1554
Conversation
-	reports = append(reports, rep)
-	c.cache.Set(reportKey, rep)
+	reports = append(reports, *r.report)
+	c.cache.Set(r.key, *r.report)
Did you see any performance improvement running this on your laptop?
Haven't actually run it on my laptop, so will at least do that before landing. I'll also at least attempt to write unit tests (but might end up deciding they're too hard).
Ran this locally & verified that it doesn't break. No metrics. Won't bother with unit tests now. @tomwilkie OK to merge?
@tomwilkie asked for metrics: Before / After (screenshots omitted), running on my laptop locally.
LGTM |
Currently, if a bunch of requested reports aren't in the cache, we fetch them one at a time inline with the request. This patch updates the code to fetch reports in parallel instead.
While making the change, I noticed that we handled S3 errors differently from gzip & decoder errors. I've updated the code to handle all errors in the same way (i.e. no special logging, just return).
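The parallel-fetch shape described above can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not the PR's actual code: `report`, `fetchReport`, and `fetchAll` are stand-in names, and the real implementation fetches serialized reports from S3, decodes them, and writes cache entries.

```go
package main

import (
	"fmt"
	"sync"
)

// report is a stand-in for the real report type.
type report struct{ key string }

// fetchReport stands in for the extracted single-report fetch
// (S3 get + gzip + decode in the real code).
func fetchReport(key string) (*report, error) {
	return &report{key: key}, nil
}

// fetchAll fetches every non-cached report concurrently instead of
// one at a time, returning the first error encountered.
func fetchAll(keys []string) ([]report, error) {
	type result struct {
		rep *report
		err error
	}
	results := make(chan result, len(keys))

	var wg sync.WaitGroup
	for _, key := range keys {
		wg.Add(1)
		go func(key string) {
			defer wg.Done()
			rep, err := fetchReport(key)
			results <- result{rep, err}
		}(key)
	}
	wg.Wait()
	close(results)

	var reports []report
	for r := range results {
		if r.err != nil {
			// All error kinds are handled uniformly: just return.
			return nil, r.err
		}
		reports = append(reports, *r.rep)
	}
	return reports, nil
}

func main() {
	reports, err := fetchAll([]string{"a", "b", "c"})
	fmt.Println(len(reports), err)
}
```

Buffering the results channel to `len(keys)` lets every goroutine complete even if the caller bails out early on the first error.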
Currently, getNonCached returns only the first error it finds. It should probably aggregate all of the errors somehow so we aren't silently ignoring important conditions. I don't know how best to do that.

Along the way, I extracted the logic for fetching a single report (to make the actual parallelism logic more clear).