Server-side gRPC flow control results in small window sizes and client message backlog #11723
TCP is sorta neither here nor there, as we have per-connection flow control already. It is also very hard to use such signals while also avoiding stream starvation/unfairness. Do you have a limit to the number of concurrent RPCs you're performing? Unary RPCs don't have flow control either, and if you create an unbounded number of them you'd experience the same problem.
Currently, we check onReady before calling onNext, but memory still overflows because there are too many streams. However, in our real business scenarios, we do need streams of this magnitude. Is there a plan to expose the correspondence between the Netty channel and the stream to the application layer?
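For context, the per-stream backpressure pattern referred to above looks roughly like this in grpc-java. This is a sketch, not code from the issue: `Msg`, `hasMoreMessages()`, and `nextMessage()` are hypothetical placeholders for the application's own types and message source.

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;

// Hypothetical server-streaming handler.
public void listItems(Msg request, StreamObserver<Msg> responseObserver) {
  ServerCallStreamObserver<Msg> serverObserver =
      (ServerCallStreamObserver<Msg>) responseObserver;
  serverObserver.setOnReadyHandler(() -> {
    // Send only while the transport can accept more data without buffering.
    while (serverObserver.isReady() && hasMoreMessages()) {
      serverObserver.onNext(nextMessage());
    }
    if (!hasMoreMessages()) {
      serverObserver.onCompleted();
    }
  });
}
```

Note this pattern only bounds buffering per stream; as the comment above points out, with an unbounded number of concurrent streams total memory can still grow without limit.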
We aren't going to expose TCP details to the streams. It wouldn't work, as it is inherently unfair. This issue really doesn't describe your problem; I would hope to see numbers. It more asserts the problem and expects a particular solution. But there are other options. Is the memory use actually expected, or is the problem simply #11719? Should the per-stream buffer be reduced in this case from its default of 32 KiB using CallOptions.withOnReadyThreshold()? Or is the memory use expected, and should you instead increase the JVM's direct memory limit?
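To illustrate the withOnReadyThreshold() option mentioned above: lowering the threshold makes isReady() report false sooner, so less data queues per stream before backpressure kicks in. A minimal sketch, assuming a `channel` and a `method` descriptor already exist in the caller's context:

```java
import io.grpc.CallOptions;
import io.grpc.ClientCall;

// Lower the per-stream onReady buffer from the 32 KiB default to 4 KiB.
ClientCall<Req, Resp> call = channel.newCall(
    method, CallOptions.DEFAULT.withOnReadyThreshold(4 * 1024));
```

The trade-off is throughput: a smaller threshold means the sender stalls more often waiting for the transport to drain, in exchange for a tighter bound on buffered bytes per stream.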
Without knowing more, I don't see anything more that can be done here. Closing, but comment with more info and it can be reopened. Note also that if direct memory is the main problem (the amount of memory is fine, just the type of memory is the problem), it is possible to make your own ByteBufAllocator instance that prefers heap memory and pass it to gRPC's Netty builders via Netty's channel options.
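A sketch of the heap-preferring allocator approach suggested above, using Netty's pooled allocator with `preferDirect = false` and passing it through gRPC's Netty server builder (the port number is arbitrary):

```java
import io.grpc.netty.NettyServerBuilder;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelOption;

// Pooled allocator that prefers heap buffers over direct (off-heap) memory.
PooledByteBufAllocator heapAlloc = new PooledByteBufAllocator(false);

NettyServerBuilder.forPort(50051)
    .withOption(ChannelOption.ALLOCATOR, heapAlloc)       // server channel
    .withChildOption(ChannelOption.ALLOCATOR, heapAlloc)  // accepted connections
    .build();
```

The same `ChannelOption.ALLOCATOR` option can be set on `NettyChannelBuilder.withOption(...)` for the client side. This moves the buffering pressure onto the heap, where the garbage collector and standard heap sizing apply, rather than eliminating it.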
We are experiencing an issue with gRPC where the server side stalls, leading to very small flow-control window sizes. This causes gRPC client messages to accumulate in the `DefaultHttp2RemoteFlowController.pendingWriteQueue`.

During high-throughput scenarios with numerous streams, we have noticed that even when employing `stream.isReady()` for flow control, off-heap memory usage increases significantly, ultimately leading to an Out of Memory (OOM) situation.

Currently, flow control is primarily based at the HTTP/2 layer. Would it be possible to expose some TCP-level metrics, such as a TCP equivalent of `isReady()`? This could help with finer-grained resource management, especially under high concurrency. Having more granular control over traffic at the TCP level might alleviate performance issues due to flow control, while also reducing the risk of OOM caused by excessive memory pressure.
Is there any plan from the gRPC team to consider such TCP-level improvements in future releases? Or, are there other recommended approaches to address such issues?
Thank you!