Snap Sync Fetchers: Highest-Known-Hash Optimization #2941
Conversation
Optimizing the storage fetcher is more complicated since the response for
blocking till
is merged so that the PR can be merged with keyHashing enabled
This reverts commit d026655.
will test this out today @scorbajio 👍
@@ -287,6 +290,11 @@ export class AccountFetcher extends Fetcher<JobTask, AccountData[], AccountData>
    const origin = this.getOrigin(job)
    const limit = this.getLimit(job)

    if (this.highestKnownHash && compareBytes(limit, this.highestKnownHash) < 0) {
ahh so this solves the problem that if some range is empty it won't be requested 👍
from the code review it seems that we would need a similar optimization in the storage fetcher?
Yes, but since the optimization for the storage fetcher is more complicated, I'm planning on doing it in a followup PR.
Currently, the account and storage fetchers statically partition large hash ranges of accounts and storage slots and fetch each range sequentially. During fetching, we do not know in advance which hashes, if any, exist in a given range, so a range request may return a hash that is higher than the limits of multiple range partitions still queued for requesting. When this happens, the fetchers currently still request those queued ranges even though the returned hash proves them to be empty. This change tracks the highest known key hash in the affected fetchers and skips requesting any queued range whose limit is lower than that highest known hash.
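The skip logic described above can be sketched as follows. This is an illustrative standalone sketch, not the actual ethereumjs implementation: `compareBytes` here mimics the lexicographic byte comparison used in the diff, and `RangeSkipper` is a hypothetical name standing in for the fetcher's tracking state.

```typescript
// Lexicographic byte-wise comparison (negative / zero / positive),
// mimicking the compareBytes call seen in the diff above.
function compareBytes(a: Uint8Array, b: Uint8Array): number {
  const len = Math.min(a.length, b.length)
  for (let i = 0; i < len; i++) {
    if (a[i] !== b[i]) return a[i] - b[i]
  }
  return a.length - b.length
}

// Hypothetical helper modeling the fetcher's highest-known-hash state.
class RangeSkipper {
  highestKnownHash?: Uint8Array

  // Record the highest key hash seen in any peer response so far.
  observe(hash: Uint8Array): void {
    if (
      this.highestKnownHash === undefined ||
      compareBytes(hash, this.highestKnownHash) > 0
    ) {
      this.highestKnownHash = hash
    }
  }

  // A queued range is provably empty when its upper bound (limit)
  // sits below a hash a peer has already returned, so it can be
  // skipped instead of requested.
  shouldSkip(limit: Uint8Array): boolean {
    return (
      this.highestKnownHash !== undefined &&
      compareBytes(limit, this.highestKnownHash) < 0
    )
  }
}

const skipper = new RangeSkipper()
// A response contained an account hash of 0x8000...
skipper.observe(new Uint8Array([0x80, 0x00]))
// Queued range ending at 0x40ff is below it: skip.
console.log(skipper.shouldSkip(new Uint8Array([0x40, 0xff]))) // true
// Queued range ending at 0xc000 is above it: still fetch.
console.log(skipper.shouldSkip(new Uint8Array([0xc0, 0x00]))) // false
```

The check mirrors the `compareBytes(limit, this.highestKnownHash) < 0` condition in the diff: only the range's limit matters, since any range wholly below the highest observed hash cannot contain unseen keys.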