Performance opt pt 1 #1359
Changes from all commits: 72c0d21, 5e1af9e, b2bafa4, 4e6d2d4, e9f273d
@@ -76,6 +76,7 @@ void database::reindex( fc::path data_dir )

```cpp
   uint32_t skip = skip_witness_signature |
                   skip_block_size_check |
                   skip_merkle_check |
                   skip_transaction_signatures |
                   skip_transaction_dupe_check |
                   skip_tapos_check |
```
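The skip values are bit flags OR-ed into a single mask; later in this diff one of them is cleared again with `skip &= ~flag`. Below is a self-contained sketch of that pattern, using illustrative flag values rather than graphene's actual definitions:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative bit-flag values; the real flags are defined in graphene's
// database validation-skip enum and may differ.
enum : uint32_t {
   skip_witness_signature      = 1u << 0,
   skip_transaction_signatures = 1u << 1,
   skip_transaction_dupe_check = 1u << 2,
   skip_block_size_check       = 1u << 3
};

int main()
{
   // Combine several checks to skip while replaying old blocks.
   uint32_t skip = skip_witness_signature |
                   skip_transaction_signatures |
                   skip_transaction_dupe_check |
                   skip_block_size_check;

   // Clear a single bit to re-enable one check, as the PR does for the
   // dupe check once replay approaches the head block.
   skip &= ~skip_transaction_dupe_check;

   std::printf( "dupe check skipped: %s\n",
                ( skip & skip_transaction_dupe_check ) ? "yes" : "no" );
   return 0;
}
```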
@@ -84,6 +85,7 @@ void database::reindex( fc::path data_dir )

```cpp
   size_t total_processed_block_size;
   size_t total_block_size = _block_id_to_block.total_block_size();
   const auto& gpo = get_global_properties();
   for( uint32_t i = head_block_num() + 1; i <= last_block_num; ++i )
   {
      if( i % 10000 == 0 )
```
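The two size counters presumably drive the progress message printed every 10000 blocks, reporting replay progress by processed bytes rather than by block count. A hedged sketch of how such a percentage might be computed; the function and its formatting are illustrative, not the code from this PR:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical progress report: percentage of total block-log bytes
// processed so far, plus the current block number. The real reindex loop
// would update total_processed_block_size as each block is fetched.
void report_progress( std::size_t total_processed_block_size,
                      std::size_t total_block_size,
                      std::uint32_t current_block,
                      std::uint32_t last_block_num )
{
   double pct = ( total_block_size == 0 ) ? 0.0
              : 100.0 * double( total_processed_block_size ) / double( total_block_size );
   std::printf( "   %6.2f%%   block %u of %u\n",
                pct, unsigned( current_block ), unsigned( last_block_num ) );
}

int main()
{
   report_progress( 250u * 1024 * 1024, 1024u * 1024 * 1024, 10000u, 40000u );
   return 0;
}
```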
@@ -106,6 +108,8 @@ void database::reindex( fc::path data_dir )

```cpp
         flush();
         ilog( "Done" );
      }
      if( head_block_time() >= last_block->timestamp - gpo.parameters.maximum_time_until_expiration )
         skip &= ~skip_transaction_dupe_check;
```
Reviewer: Although this is better than the old code at preventing duplicate transactions from being pushed, I think it's not 100% safe due to the use of …. Ideally we should use the timestamp of the head block after replay as …. Alternatively, we can skip the dupe check during replay and revisit the pushed blocks to fill the dupe-check db just before enabling undo_db. In order to determine how many blocks to revisit, we can keep track of the greatest ….

Author: You are correct, but I think it doesn't matter. The dupe check is really only relevant when a witness signs a new block, and that happens only close to now().

Reviewer: My point is that it's not only about block generation but also about block validation. If witnesses colluded in signing/accepting blocks with duplicate transactions, other nodes would retain the ability to recognize the misbehavior and reject the invalid blocks. I agree that practically it's not a big deal; we've been running without that check for years, due to the strict timing/conditions required to exploit the potential issue. E.g. it only affects nodes that have just finished a replay, and during normal operation most nodes won't be in that state. IMHO we can leave the logic as proposed in this PR, but it would be better to add some comments explaining why we did so, what scenarios were considered, and why they were ignored.

Author: After rethinking I switched to last_block->timestamp instead of now. After a chain halt (of course these won't happen again, so it doesn't really matter) it wouldn't require colluding witnesses - any witness could sneak a block with a dupe into the gap.
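For illustration, here is a rough sketch of the reviewer's alternative: leave the dupe check off for the whole replay, then back-fill the duplicate-transaction index from the most recently replayed blocks before normal validation resumes. Every type and function name below is hypothetical, not an actual graphene API.

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch: leave the dupe check disabled for the whole replay,
// then back-fill the duplicate-transaction index from the last few replayed
// blocks before undo_db (and normal validation) is switched back on.
struct sketch_transaction { std::string id; };
struct sketch_block      { std::vector<sketch_transaction> transactions; };

void fill_dupe_check_db( const std::vector<sketch_block>& recent_blocks,
                         std::set<std::string>& dedupe_index )
{
   for( const auto& b : recent_blocks )
      for( const auto& tx : b.transactions )
         dedupe_index.insert( tx.id );   // re-inserting a known id is a no-op
}

int main()
{
   sketch_block b;
   b.transactions = { { "tx-1" }, { "tx-2" } };
   std::vector<sketch_block> tail = { b };
   std::set<std::string> dedupe_index;
   fill_dupe_check_db( tail, dedupe_index );   // index now covers the tail blocks
   return 0;
}
```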
```cpp
      fc::optional< signed_block > block = _block_id_to_block.fetch_by_number(i);
      if( !block.valid() )
      {
```
Reviewer: I like that you did this. It is more easily readable IMO. Currently you're protected in all cases with the ifs, but hopefully someone later will know better than to trust that trx_id is set.
Author: Good point, will address this in #1360 (the id is cached inside the transaction there).
Reviewer: To avoid using an uninitialized id, perhaps wrap it with optional? May cause some overhead, though.
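A minimal sketch of that suggestion, assuming the transaction id is cached lazily and guarded by an optional so an unset id cannot be read by accident. The type and member names are illustrative; graphene would use fc::optional and its real digest type, and #1360 may solve this differently.

```cpp
#include <cassert>
#include <optional>
#include <string>

// Illustrative only: cache the transaction id lazily and guard it with an
// optional, so code cannot silently read an id that was never computed.
struct cached_id_transaction
{
   std::string payload;
   mutable std::optional<std::string> cached_id;

   const std::string& id() const
   {
      if( !cached_id )                              // compute on first use
         cached_id = "hash(" + payload + ")";       // stand-in for the real digest
      return *cached_id;
   }
};

int main()
{
   cached_id_transaction trx{ "transfer", std::nullopt };
   assert( trx.id() == "hash(transfer)" );          // id is always set before use
   return 0;
}
```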