Support proposervm historical block deletion #1929
Conversation
@@ -134,3 +135,8 @@ func (s *blockState) PutBlock(blk block.Block, status choices.Status) error {
	s.blkCache.Put(blkID, &blkWrapper)
	return s.db.Put(blkID[:], bytes)
}

func (s *blockState) DeleteBlock(blkID ids.ID) error {
	s.blkCache.Evict(blkID)
Should we s.blkCache.Put(blkID, nil) here rather than evicting?
Delete seems fine to me.
IIRC we only read historical blocks via the GetAncestors call, which should stop at the first database.ErrNotFound. I'd rather use the cache for fresh blocks than for historical ones.
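For reference, here is a minimal sketch of how the complete DeleteBlock could read under the approach discussed above; the db.Delete call is an assumption and is not part of the quoted diff.

func (s *blockState) DeleteBlock(blkID ids.ID) error {
	// Drop any cached copy so later reads fall through to the database.
	s.blkCache.Evict(blkID)
	// Remove the persisted block bytes; assumed to mirror the Put in PutBlock.
	return s.db.Delete(blkID[:])
}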
Still need to write some unit tests, but I think this is ready for at least a first-pass review.
questions and minor requests for comments
@@ -599,7 +619,10 @@ func (vm *VM) repairAcceptedChainByHeight(ctx context.Context) error {

	newProLastAcceptedID, err := vm.State.GetBlockIDAtHeight(innerLastAcceptedHeight)
	if err != nil {
		return err
	// This fatal error can happen if NumHistoricalBlocks is set too
q: again, this shouldn't really happen unless the innerVM is being clever, right?
That's correct - this should not happen unless the inner VM rolled back further than the proposervm is configured to support.
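To illustrate the discussion, here is a hedged sketch of the kind of error handling in question; the wording of the wrapped error and the continuation of the truncated comment are assumptions, not the merged code.

newProLastAcceptedID, err := vm.State.GetBlockIDAtHeight(innerLastAcceptedHeight)
if err != nil {
	// Assumed continuation of the truncated comment: this fatal error can
	// happen if NumHistoricalBlocks is set too aggressively and the inner VM
	// rolled back past the oldest proposervm block still on disk.
	return fmt.Errorf("couldn't get proposervm block ID at inner height %d: %w",
		innerLastAcceptedHeight, err)
}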
LGTM
Why this should be merged
High-performance blockchains issue large blocks and issue them frequently. Even if such a blockchain has a robust state-bloat prevention mechanism, the block data itself still needs to be maintained. This change allows node operators to restrict the number of historical blocks kept on disk.
It's important for a network maintainer to be aware, however, that historical block state may be lost if every node is configured to delete it. Thankfully, historical block state isn't required for validators, so specialized storage solutions can readily be applied here.
How this works
Adds a proposerNumHistoricalBlocks option to subnet configs. When set, the proposervm only retains blocks with heights in [lastAcceptedHeight - proposerNumHistoricalBlocks, lastAcceptedHeight]. For example, with lastAcceptedHeight=10 and proposerNumHistoricalBlocks=2, the indexed heights should be [8,10], because 10 is not historical, and 9 and 8 are the 2 most recent historical blocks.
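To make the arithmetic concrete, here is a hypothetical, self-contained sketch; the function and variable names are illustrative and are not the PR's actual code.

package main

import "fmt"

// oldestRetainedHeight returns the lowest block height kept on disk when
// numHistoricalBlocks historical blocks are retained behind lastAcceptedHeight.
func oldestRetainedHeight(lastAcceptedHeight, numHistoricalBlocks uint64) uint64 {
	if lastAcceptedHeight <= numHistoricalBlocks {
		return 0 // not enough blocks yet to delete anything
	}
	return lastAcceptedHeight - numHistoricalBlocks
}

func main() {
	// The example above: lastAcceptedHeight=10, proposerNumHistoricalBlocks=2
	// => heights [8, 10] are retained, so blocks below height 8 may be deleted.
	fmt.Println(oldestRetainedHeight(10, 2)) // prints 8
}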
How this was tested