Stream receivebuffer continually grows #43
Comments
It's not entirely true that data will stay at the beginning of the buffer until the stream is closed, is it? If you reach a point where the buffer is fully drained, it should reclaim that space. But I agree that it does seem like the buffer can keep growing even if you are constantly reading.
@havoc-io Yes, Buffer has this property. However, if you look at https://github.com/hashicorp/yamux/blob/master/stream.go#L107-L112 you will see that this branch will never be taken in practice. You can see the problem happen in the unit tests if you output the capacity of the buffer during the test. I would fix it by replacing the Buffer with a simple slice, with behavior similar to https://github.com/bradfitz/http2/blob/master/buffer.go
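As a standalone illustration of that property (a generic `bytes.Buffer` sketch, unrelated to yamux's code), compare a buffer that is fully drained before each write with one that always keeps one unread byte: only the first reclaims its space eagerly, while in the second the already-read bytes linger until `bytes.Buffer` internally slides or reallocates.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	chunk := make([]byte, 64<<10) // 64 KiB writes
	tmp := make([]byte, 64<<10)   // 64 KiB reads

	// Fully drained before every write: bytes.Buffer resets its read
	// offset on the next write, so the backing array is reused and the
	// capacity stays at roughly one chunk.
	var drained bytes.Buffer
	for i := 0; i < 100; i++ {
		drained.Write(chunk)
		io.ReadFull(&drained, tmp)
	}
	fmt.Println("always emptied:", drained.Cap())

	// One unread byte is always left behind, so the "reset when empty"
	// branch inside bytes.Buffer never fires; already-read bytes stay in
	// the backing array until the buffer slides or reallocates, and the
	// capacity ends up a multiple of the chunk size.
	var residual bytes.Buffer
	residual.WriteByte(0)
	for i := 0; i < 100; i++ {
		residual.Write(chunk)
		io.ReadFull(&residual, tmp)
	}
	fmt.Println("never emptied: ", residual.Cap())
}
```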
Ah, that's true. It seems like a weird design decision on the part of bytes.Buffer.
The problem is that the window update logic is broken.
I believe this issue was solved by #53, so I think it can be closed. I checked using a test that mismatches write and read speeds (a sketch of that kind of test is below).
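This is not the original test; it is a rough, hypothetical sketch of a mismatched-speed test using the public yamux API over `net.Pipe`. The test name, sizes, and package name are invented for illustration.

```go
package yamux_test

import (
	"io"
	"net"
	"testing"

	"github.com/hashicorp/yamux"
)

// TestMismatchedReadWriteSpeeds: the writer produces data far faster than
// the reader consumes it, which is the pattern that exercises the
// per-stream receive buffer.
func TestMismatchedReadWriteSpeeds(t *testing.T) {
	clientConn, serverConn := net.Pipe()

	client, err := yamux.Client(clientConn, nil)
	if err != nil {
		t.Fatal(err)
	}
	defer client.Close()
	server, err := yamux.Server(serverConn, nil)
	if err != nil {
		t.Fatal(err)
	}
	defer server.Close()

	const total = 4 << 20 // 4 MiB, arbitrary

	// Writer: open a stream and push large chunks back to back.
	go func() {
		stream, err := client.OpenStream()
		if err != nil {
			return
		}
		defer stream.Close()
		chunk := make([]byte, 64<<10)
		for sent := 0; sent < total; sent += len(chunk) {
			if _, err := stream.Write(chunk); err != nil {
				return
			}
		}
	}()

	// Reader: accept the stream and drain it in much smaller chunks, so
	// unread data accumulates in the stream's receive buffer between reads.
	stream, err := server.AcceptStream()
	if err != nil {
		t.Fatal(err)
	}
	buf := make([]byte, 1<<10)
	received := 0
	for received < total {
		n, err := stream.Read(buf)
		received += n
		if err == io.EOF {
			break
		}
		if err != nil {
			t.Fatal(err)
		}
	}
}
```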
I have a similar problem in production, like the OP. Here is a pprof dump (linux, amd64) that pointed me to this issue:
The biggest jumps occur when a new stream is opened. Is there anything else I could provide, apart from a test case that reproduces the issue?
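As general background, one common way to capture a heap profile like the one referenced is Go's `net/http/pprof` package; a minimal sketch (the hostname and port are arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// While the process is under load, fetch a heap profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```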
@pbedat What version of yamux are you using? Could you retest with the latest? Any repro would be great as well!
@evanphx I was using github.com/hashicorp/yamux v0.0.0-20200609203250-aecfd211c9ce (Go 1.15). I switched to smux right after that post, which served me well. I could try to reproduce this, but I don't think it would be worth it. A lot has changed in 1.5 years 😉
The library currently uses bytes.Buffer for buffering data received from the network connection until it is fetched by the user through a Read() call. The problem with that is that all writes from the network append to the end of the buffer. Reads read from the current position and shift the read pointer toward the end of the buffer, but the already read data still stays at the beginning of the buffer until the stream is closed.
This means, for example, that transferring a 100 MB file through yamux will grow the stream's receive buffer to 100 MB.
This could be fixed by using a ring buffer big enough to hold the maximum window size, or a window-sized buffer where each read copies the remaining data to the front, resets the read pointer to 0, and adjusts the write pointer accordingly.
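As a minimal sketch of the second suggestion (invented names, not yamux's implementation): a fixed, window-sized buffer whose Read shifts the unread remainder to the front, so consumed bytes never accumulate.

```go
package buffer

import (
	"errors"
	"io"
)

// recvBuffer is a sketch of a window-sized receive buffer: the backing
// array never grows, and already-read bytes are discarded by shifting the
// unread remainder to the front on every Read.
type recvBuffer struct {
	buf []byte // fixed backing array, sized to the receive window
	n   int    // number of unread bytes at the front of buf
}

func newRecvBuffer(windowSize int) *recvBuffer {
	return &recvBuffer{buf: make([]byte, windowSize)}
}

// Write appends p after the unread bytes. It fails if p does not fit,
// which flow control (the advertised receive window) should prevent.
func (b *recvBuffer) Write(p []byte) (int, error) {
	if len(p) > len(b.buf)-b.n {
		return 0, errors.New("recvBuffer: write exceeds window")
	}
	copy(b.buf[b.n:], p)
	b.n += len(p)
	return len(p), nil
}

// Read copies unread bytes into p, then moves whatever is left to the
// front so the write offset always equals the number of unread bytes.
func (b *recvBuffer) Read(p []byte) (int, error) {
	if b.n == 0 {
		// A real implementation would block until data arrives; returning
		// io.EOF keeps the sketch self-contained.
		return 0, io.EOF
	}
	c := copy(p, b.buf[:b.n])
	copy(b.buf, b.buf[c:b.n])
	b.n -= c
	return c, nil
}
```

A ring buffer would avoid the copy on every read at the cost of slightly more index bookkeeping.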