Adjust receive window to make them linear to the count of streams #33913
Merged
lijunwangs merged 1 commit into solana-labs:master from lijunwangs:adjust_receive_window on Nov 14, 2023
Not very familiar with how we use QUIC, so forgive my long and possibly out-of-scope questions. I'm trying to understand the effect these constants have.

This means an unstaked peer can send 128 * PACKET_DATA_SIZE bytes before being blocked, right? PACKET_DATA_SIZE does not include IP or QUIC header bytes, so this would be slightly less than 128 max-length transactions?

This is a big jump from 1 to 128 for unstaked peers. Does this 128x the amount of unstaked traffic allowed by the quic-server? Are there concerns about overwhelming the server?

When does this receive window get reset? E.g. the thin-client is spamming transactions all the time during bench-tps, so I'd expect it to nearly always have more than 128 txs in flight. Would the window be reset on acknowledgement?

What does that mean for my packets if, say, 1024 packets arrive very close together in time? Presumably the software QUIC server would not be fast enough to respond before all the packets physically arrive. Would the first 128 be accepted and the rest simply dropped?
Correct, the receive window is set linearly with the number of streams allowed: each connection may consume up to the stream count limit x PACKET_DATA_SIZE. That amounts to 128 x 1232 bytes, about 158 KB, at maximum for an unstaked connection. We allow 500 unstaked connections, so that comes to roughly 75 MB maximum across all unstaked connections, which I think is a reasonable number. In any case, my thinking is that the number of streams and the receive window should be kept in a linear ratio. Allowing a large number of streams with a small receive window can (1) reduce throughput because of fragmentation and (2) actually increase both CPU and memory load: on the CPU side, the client and server must maintain state and keep polling the blocked streams, and on the memory side, we are buffering the data in the QUIC streamer anyway until the full stream data is received. So in the worst-case scenario I do not think this increases maximum memory consumption. The receive window is dynamically adjusted by the QUIC protocol layer as data is received and consumed -- it is a sliding window.
A good read explaining this mechanism is https://docs.google.com/document/d/1F2YfdDXKpy20WVKJueEf4abn_LVZHhMUMS5gX6Pgjl4/mobilebasic.
If a sender exceeds the receive window, the QUIC protocol signals that the stream is blocked; the client must then wait for the server to extend the window (a MAX_STREAM_DATA / MAX_DATA frame in QUIC, analogous to HTTP/2's WINDOW_UPDATE). If a client maliciously ignores the flow-control mechanism, the server can drop the connection.
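The linear relationship described above can be sketched as a plain function (hypothetical helper name; `PACKET_DATA_SIZE` matches Solana's 1232-byte constant, and the stream counts mirror the figures in this discussion):

```rust
/// Maximum serialized transaction size, as in solana_sdk::packet.
const PACKET_DATA_SIZE: u64 = 1232;

/// Hypothetical helper: a per-connection receive window that scales
/// linearly with the number of concurrent streams the peer is allowed,
/// so each allowed stream can hold one maximum-size packet in flight.
fn receive_window_for(max_concurrent_streams: u64) -> u64 {
    max_concurrent_streams * PACKET_DATA_SIZE
}

fn main() {
    // Unstaked peers: 128 streams -> 157,696 bytes (~158 KB).
    println!("unstaked window: {} bytes", receive_window_for(128));
    // Staked peers: 512 streams -> 630,784 bytes (~631 KB).
    println!("staked window:   {} bytes", receive_window_for(512));
}
```

In quinn this value would be applied through the transport configuration when the endpoint is built; the sketch only shows the sizing rule itself.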
It seems this will increase the receive window for all the streams of a given connection. So, just to understand: this can potentially increase the RAM usage of a node, correct? If so, what is the worst-case increase in RAM usage?
In the worst-case scenario, in which malicious clients intentionally hold off finalizing their streams and the streamer fails to pull the data off the quinn protocol layer, the memory consumed would be:
128 x 1232 x 512 / 1024^2 = 77 MB for unstaked
512 x 1232 x 2000 / 1024^2 ≈ 1203 MB for staked
But we are very unlikely to run into this situation, as we actively poll data from quinn into the streamer's internal buffers. In that case, even if a client intentionally stops finalizing its data, this does not increase the net worst-case memory pressure, because the data would have been buffered in the streamer's temporary buffers anyway, whose size is dictated by stream_per_connection_count * connection_count.
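As a sanity check, the worst-case figures above can be recomputed with a short sketch (hypothetical function name; stream and connection counts are the ones quoted in this thread):

```rust
/// Maximum serialized transaction size, as in solana_sdk::packet.
const PACKET_DATA_SIZE: u64 = 1232;

/// Hypothetical helper: worst-case bytes buffered if every allowed
/// stream on every connection holds one max-size packet unfinalized.
fn worst_case_bytes(streams_per_conn: u64, connections: u64) -> u64 {
    streams_per_conn * PACKET_DATA_SIZE * connections
}

fn main() {
    const MIB: u64 = 1024 * 1024;
    // 128 streams x 1232 bytes x 512 unstaked connections.
    println!("unstaked: {} MiB", worst_case_bytes(128, 512) / MIB); // 77 MiB
    // 512 streams x 1232 bytes x 2000 staked connections.
    println!("staked:   {} MiB", worst_case_bytes(512, 2000) / MIB); // 1203 MiB
}
```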