Speed up request buffered hashes #6318
Conversation
this is still very expensive because we blindly iterate all hashes in the hope the peer is the fallback, can we not keep track of the peer's hashes instead when registering them as fallback peers?
yes, this is step 2 as mentioned above. the peer's hashes are already tracked in
we don't even need to register any peer as fallback peer in that case, it just doubles up storage of the same data, which can be derived by checking the peer's seen txns against buffered hashes (buffered as in buffered for re-fetch, or for first fetch if the hash didn't fit in the request that was triggered by processing the announcement in which the hash was seen for the first time).
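A minimal sketch of that idea, under assumptions: the function name `fill_request_from_buffer_for_peer` is the one discussed in this PR, but the signature, the `Peer`/`seen_transactions` fields, and the flat hash set here are hypothetical stand-ins, not reth's actual types. The point is that the request is filled by filtering buffered hashes through the peer's seen-transactions cache, so no separate fallback-peer registration is needed.

```rust
use std::collections::HashSet;

type TxHash = [u8; 32];

/// Hypothetical stand-in for peer state; `seen_transactions` plays the role
/// of the bounded cache of hashes this peer has announced or sent to us.
struct Peer {
    seen_transactions: HashSet<TxHash>,
}

/// Fill a request for `peer` by intersecting the hashes buffered for
/// (re-)fetch with the hashes the peer is known to have seen, instead of
/// consulting a separately maintained fallback-peer mapping.
fn fill_request_from_buffer_for_peer(
    peer: &Peer,
    buffered_hashes: &HashSet<TxHash>,
    soft_limit: usize,
) -> Vec<TxHash> {
    buffered_hashes
        .iter()
        .filter(|hash| peer.seen_transactions.contains(*hash))
        .take(soft_limit)
        .copied()
        .collect()
}
```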
This is how the branch is looking on mainnet rn. [mainnet metrics screenshots] This is samply at commit 739d3c1. Will update with a new samply from the latest commit later.
Will do a more comprehensive review soon, but I just saw these constant names and wanted to see if they could be made shorter
```rust
/// Default soft limit for the number of hashes in a
/// [`GetPooledTransactions`](reth_eth_wire::GetPooledTransactions) request, when it is filled
/// from hashes pending fetch. Default is half of the
/// [`SOFT_LIMIT_COUNT_HASHES_IN_GET_POOLED_TRANSACTIONS_REQUEST`] which by spec is 256
/// hashes, so 128 hashes.
pub const DEFAULT_SOFT_LIMIT_COUNT_HASHES_IN_GET_POOLED_TRANSACTIONS_REQUEST_ON_FETCH_PENDING_HASHES:
    usize = SOFT_LIMIT_COUNT_HASHES_IN_GET_POOLED_TRANSACTIONS_REQUEST / 2;

/// Default soft limit for a [`PooledTransactions`](reth_eth_wire::PooledTransactions) response
/// when it's used as expected response in calibrating the filling of a
/// [`GetPooledTransactions`](reth_eth_wire::GetPooledTransactions) request, when the request
/// is filled from hashes pending fetch. Default is half of
/// [`DEFAULT_SOFT_LIMIT_BYTE_SIZE_POOLED_TRANSACTIONS_RESPONSE_ON_PACK_GET_POOLED_TRANSACTIONS_REQUEST`],
/// which defaults to 128 KiB, so 64 KiB.
pub const DEFAULT_SOFT_LIMIT_BYTE_SIZE_POOLED_TRANSACTIONS_RESPONSE_ON_FETCH_PENDING_HASHES:
    usize = DEFAULT_SOFT_LIMIT_BYTE_SIZE_POOLED_TRANSACTIONS_RESPONSE_ON_PACK_GET_POOLED_TRANSACTIONS_REQUEST / 2;
```
I like the documentation on these, but the names are gigantic; is there any way to make this more concise?
yeah I know they are, but not having enough info can cost a lot of time since then they get interpreted wrong. we already had this with some other constants recently. will eventually change some of these to const functions anyway, then we can assess the lengths again.
lgtm
sharing @Rjected's view that some names are gigantic, but can bikeshed separately.
lgtm, nice work
closes #6148. closes #6308.
speeds up requesting buffered hashes by
- [x] dividing the hashes store (`unknown_hashes`, `buffered_hashes` [and `meta`]) into eth68 and eth66. this means only hashes will be traversed that can be included in the request being assembled in `fill_request_from_buffer_for_peer`.
- [x] using the `transactions` list on the `Peer` type to search `buffered_hashes` after `pop_any_idle_peer` returns. that cache has capacity 10 240 and buffered hashes has capacity 25 600. also, then we just have to search buffered hashes, and not nested lists (even if they just default to 3 elements long) in buffered hashes for our peer returned by `pop_any_idle_peer`. the effectiveness of this depends on how well we can update the `transactions` list in the `Peer` type on-op. ~~we will only update the `transactions` list on the `Peer` type when we need to touch it anyway, on-op.~~ we won't move around elements in lists of the peer's seen transactions. it serves only as a hint to which hash cannot be pending, not as a perfect list of which hashes are pending. this totally satisfies requirements and otherwise it doesn't scale. (see the sketch after this list.)
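The sketch below puts both checklist items together, under assumptions: the buffered store is modeled as two flat sets split by protocol version, the per-peer `transactions` cache is a plain set, and all field names, capacities, and signatures are illustrative rather than reth's actual API. It shows the shape of the change: scan only the version bucket that matches the peer, filtered through the peer's seen-transactions hint, instead of walking nested fallback-peer lists per buffered hash.

```rust
use std::collections::HashSet;

type TxHash = [u8; 32];

/// Hypothetical buffered-hash store split by announcement version, so a
/// request for an eth/68 peer never traverses eth/66-only hashes (and vice
/// versa). The capacity bound from the description (25 600) is omitted here.
#[derive(Default)]
struct BufferedHashes {
    eth68: HashSet<TxHash>,
    eth66: HashSet<TxHash>,
}

/// Hypothetical peer state; `transactions` stands in for the bounded cache
/// (capacity 10 240 in the description) of hashes the peer has seen. It is
/// only a hint for which hashes cannot be pending for this peer, not a
/// perfect record of which hashes are pending.
#[derive(Default)]
struct Peer {
    transactions: HashSet<TxHash>,
    is_eth68: bool,
}

/// After `pop_any_idle_peer` returns a peer, assemble its request by scanning
/// only the matching version bucket and keeping hashes the peer has seen.
fn fill_request_for_idle_peer(
    peer: &Peer,
    buffered: &BufferedHashes,
    soft_limit: usize,
) -> Vec<TxHash> {
    let bucket = if peer.is_eth68 { &buffered.eth68 } else { &buffered.eth66 };
    bucket
        .iter()
        .filter(|hash| peer.transactions.contains(*hash))
        .take(soft_limit)
        .copied()
        .collect()
}
```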