
fix(sync): avoid unnecessary block hash fetching during block sync #3241

Merged
merged 3 commits into from
Aug 28, 2020

Conversation

bowenwang1996
Collaborator

Currently, on every iteration of block sync we recompute the hashes of the blocks that we need to sync, which is extremely slow when the number of blocks to sync is large. This PR fixes that by caching the hashes and only updating them when we are close to syncing all the blocks in the cache.

Test plan

  • test_block_sync
  • Run nightly tests
  • Test it on testnet.
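The caching idea above can be sketched as follows. This is a hedged, illustrative example only: `BlockHashCache`, `next_batch`, and the `u64` stand-in for block hashes are my own names, not nearcore's actual types. It shows the expensive hash recomputation running only when the cache is nearly drained, rather than on every sync iteration.

```rust
// Illustrative sketch (not nearcore code): cache the hashes of blocks still
// to be requested, and only recompute the list when the cache is nearly
// drained, instead of recomputing on every sync iteration.
struct BlockHashCache {
    hashes: Vec<u64>, // stand-in for real block hashes
    refill_threshold: usize,
}

impl BlockHashCache {
    fn new(refill_threshold: usize) -> Self {
        Self { hashes: Vec::new(), refill_threshold }
    }

    /// Returns the next batch of hashes to request, invoking the expensive
    /// `compute_hashes` only when few cached entries remain.
    fn next_batch<F>(&mut self, batch: usize, compute_hashes: F) -> Vec<u64>
    where
        F: FnOnce() -> Vec<u64>,
    {
        if self.hashes.len() <= self.refill_threshold {
            self.hashes = compute_hashes(); // the slow step, now rare
        }
        let n = batch.min(self.hashes.len());
        self.hashes.drain(..n).collect()
    }
}

fn main() {
    let mut cache = BlockHashCache::new(2);
    // First call populates the cache; later calls drain it without recomputing.
    let first = cache.next_batch(3, || (0..10u64).collect());
    println!("{:?}", first);
}
```

The key design point is that the recompute closure is only called when the remaining cache drops to the refill threshold, so most sync iterations are a cheap `drain`.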


@codecov

codecov bot commented Aug 25, 2020

Codecov Report

Merging #3241 into master will not change coverage.
The diff coverage is n/a.


```
@@           Coverage Diff           @@
##           master    #3241   +/-   ##
=======================================
  Coverage   87.76%   87.76%
=======================================
  Files         216      216
  Lines       44711    44711
=======================================
  Hits        39242    39242
  Misses       5469     5469
```

Last update b360eed...6f93b5a.

```rust
        break;
    }
    let block_exists = chain.block_exists(block_hash).unwrap_or(false);
    if block_exists {
```
Collaborator:
why only if it exists, but not if it's an orphan?

bowenwang1996 (Author):
added a comment

@bowenwang1996 bowenwang1996 merged commit 9af4d6e into master Aug 28, 2020
@bowenwang1996 bowenwang1996 deleted the fix-block-sync1 branch August 28, 2020 19:45
bowenwang1996 added a commit that referenced this pull request Sep 3, 2020
…3241)

Currently, on every iteration of block sync we recompute the hashes of the blocks that we need to sync, which is extremely slow when the number of blocks to sync is large. This PR fixes that by caching the hashes and only updating them when we are close to syncing all the blocks in the cache.

Test plan
-----------
* `test_block_sync`
* Run nightly tests
* Test it on testnet.
bowenwang1996 added a commit that referenced this pull request Sep 7, 2020
…3241)

Currently on every iteration of block sync we would recompute the hashes of blocks that we need to sync, which is extremely slow when the amount of blocks that we need to sync is large. This PR fixes it by caching the hashes and only update it when we are close to syncing all the blocks in the cache.

Test plan
-----------
* `test_block_sync`
* Run nightly tests
* Test it on testnet.
bowenwang1996 added a commit that referenced this pull request Sep 22, 2020
#3241 introduced an issue during block sync: it maintains `last_block_header` as the highest-height block header in the cache, and it checks whether that header is sufficiently close to the header head; if it is, the cache is ignored. However, because the sync loop runs every 10ms, this condition is triggered on almost every iteration, so the cache is effectively never used. This PR fixes the problem by removing the condition.

Test plan
---------
Sync a node to testnet with this change and observe major improvements in syncing speed.
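The effect of the removed condition can be sketched with a small simulation. This is a hedged illustration, not nearcore code: `recomputes`, `bypass_when_close`, and `close_to_head` are hypothetical names, and the loop stands in for the 10 ms sync ticks described above. It counts how often the expensive hash recomputation would run with and without the "close to header head" bypass.

```rust
// Illustrative sketch (hypothetical names, not nearcore code): count how many
// times the expensive hash recomputation runs over `ticks` sync iterations.
// `bypass_when_close` models the removed condition; `close_to_head` models the
// fact that, with a 10 ms loop, the node looks "close" on nearly every tick.
fn recomputes(ticks: u32, bypass_when_close: bool, close_to_head: bool) -> u32 {
    let mut cache_valid = false;
    let mut recomputed = 0;
    for _ in 0..ticks {
        let use_cache = cache_valid && !(bypass_when_close && close_to_head);
        if !use_cache {
            recomputed += 1; // expensive: recompute all block hashes
            cache_valid = true;
        }
    }
    recomputed
}

fn main() {
    // Pre-fix: the bypass fires on every tick, defeating the cache entirely.
    println!("before fix: {}", recomputes(100, true, true));
    // Post-fix: the cache is filled once and then reused.
    println!("after fix: {}", recomputes(100, false, true));
}
```

Under these assumptions the pre-fix loop recomputes on every tick, while the fixed loop recomputes only once, which matches the syncing-speed improvement reported in the test plan.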