fix: address slowness in block sync #3345

Merged · 2 commits · Sep 22, 2020
Conversation

bowenwang1996 (Collaborator)
#3241 introduced an issue during block sync: it maintains last_block_header as the block header with the largest height in the cache and checks whether that header is sufficiently close to the header head; if it is, the cache is ignored. The problem is that the sync loop runs every 10ms, so this condition is almost always triggered and the cache is effectively ignored. This PR fixes the issue by removing the condition.

Test plan

Sync a node to testnet with this change and observe major improvements in syncing speed.
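
For context, here is a minimal, self-contained Rust sketch of the pre-#3345 check. It is not the actual nearcore code: the field names mirror the diff excerpt below, but the constant value, types, and surrounding structure are assumptions made for illustration.

```rust
// Minimal sketch; not the actual nearcore code.
const BLOCK_SYNC_CACHE_LOWER_LIMIT: usize = 100; // assumed value, for illustration only

struct BlockSync {
    cache: Vec<u64>,          // heights of block headers queued for fetching
    block_fetch_horizon: u64, // how far behind the header head blocks are requested
}

impl BlockSync {
    /// Pre-#3345 check: returns true when the cached fetch schedule is thrown away.
    fn should_ignore_cache(&self, last_header_height: u64, header_head_height: u64) -> bool {
        // The cache is bypassed whenever its highest header is within
        // `block_fetch_horizon` of the header head. Because the sync loop runs
        // every 10ms, the highest cached header is almost always that close,
        // so this branch fires on nearly every iteration.
        let close_to_header_head =
            last_header_height > header_head_height.saturating_sub(self.block_fetch_horizon);
        self.cache.len() < BLOCK_SYNC_CACHE_LOWER_LIMIT || close_to_header_head
    }
}

fn main() {
    let sync = BlockSync { cache: (1..=500).collect(), block_fetch_horizon: 500 };
    // The best cached header trails the header head by less than the horizon,
    // so the well-filled cache is discarded anyway.
    assert!(sync.should_ignore_cache(99_600, 100_000));
    println!("cache ignored despite holding {} headers", sync.cache.len());
}
```

With the condition removed, the decision depends only on whether the cache holds fewer headers than the fetch horizon, so the cached schedule is actually reused between iterations.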


The inline review comment below is attached to this part of the diff (the condition being removed and its replacement):

```diff
-    let close_to_header_head =
-        last_header.height() > header_head.height.saturating_sub(self.block_fetch_horizon);
-    if self.cache.len() < BLOCK_SYNC_CACHE_LOWER_LIMIT || close_to_header_head {
+    if (self.cache.len() as u64) < self.block_fetch_horizon {
```
bowenwang1996 (Collaborator, Author) commented:

@SkidanovAlex I think this will make the node more susceptible to eclipse attacks. To address this, we could check whether the last header is on the same chain as the header head, but I don't think it's worth spending the time to polish that, given that we will likely change this code completely.
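
For illustration only, a hypothetical sketch of the "same chain" check alluded to above: the ChainView trait, ToyChain type, and their methods are invented for this example and are not nearcore's API.

```rust
use std::collections::HashMap;

// Hypothetical interface; not nearcore's API.
trait ChainView {
    fn header_head(&self) -> (u64, [u8; 32]);                     // (height, hash)
    fn canonical_hash_at(&self, height: u64) -> Option<[u8; 32]>; // hash on the head's chain
}

/// Returns true if the header (`hash`, `height`) lies on the chain leading to the
/// current header head, so a forked chain fed by an eclipsing peer is rejected.
fn on_same_chain(chain: &dyn ChainView, hash: [u8; 32], height: u64) -> bool {
    let (head_height, _head_hash) = chain.header_head();
    if height > head_height {
        return false;
    }
    chain.canonical_hash_at(height) == Some(hash)
}

// A toy in-memory chain used only to exercise the sketch.
struct ToyChain {
    canonical: HashMap<u64, [u8; 32]>, // height -> canonical header hash
    head: (u64, [u8; 32]),
}

impl ChainView for ToyChain {
    fn header_head(&self) -> (u64, [u8; 32]) {
        self.head
    }
    fn canonical_hash_at(&self, height: u64) -> Option<[u8; 32]> {
        self.canonical.get(&height).copied()
    }
}

fn main() {
    let h = |b: u8| [b; 32];
    let canonical: HashMap<u64, [u8; 32]> = (0..=10u8).map(|i| (i as u64, h(i))).collect();
    let chain = ToyChain { head: (10, h(10)), canonical };
    assert!(on_same_chain(&chain, h(7), 7));   // header on the canonical chain: accepted
    assert!(!on_same_chain(&chain, h(99), 7)); // forked header at the same height: rejected
}
```

As the comment notes, this adds complexity that may not be worth it if the block-sync code is going to be rewritten anyway.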
