Default network buffer size causes higher GC pressure than necessary #23185
To find a "good" new default value, I ran our whole benchmark suite against an instrumented build that records the number of bytes read on the network layer. Each of the attached graphs shows a histogram of the number of bytes read during network packet handling for the respective benchmark. If we settle on just a single value, I'd tend to set the default buffer size to 32kb as this should satisfy most allocation requests while not wasting too much memory. I will run further benchmarks to verify it.
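(For illustration, a minimal sketch of how such byte-level instrumentation could look in a Netty pipeline. This is an assumed example, not the instrumented build used for these benchmarks; the handler class name and the kB bucketing are hypothetical.)

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Counts how many read events fall into each size bucket. Installing a handler
 * like this at the head of the pipeline is one way to see how large the
 * network-level reads actually are.
 */
@ChannelHandler.Sharable
public class ReadSizeHistogramHandler extends ChannelInboundHandlerAdapter {

    // bucket (read size in kB, rounded up) -> number of reads in that bucket
    private static final ConcurrentMap<Integer, LongAdder> HISTOGRAM = new ConcurrentHashMap<>();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof ByteBuf) {
            int bytes = ((ByteBuf) msg).readableBytes();
            int bucketKb = (bytes + 1023) / 1024;
            HISTOGRAM.computeIfAbsent(bucketKb, k -> new LongAdder()).increment();
        }
        // pass the message on unchanged
        ctx.fireChannelRead(msg);
    }

    public static ConcurrentMap<Integer, LongAdder> histogram() {
        return HISTOGRAM;
    }
}
```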
Should we move forward on this? It looks like low-hanging fruit that has the potential to make things significantly better for most users.
@jpountz I totally agree. I ran a lot of benchmarks over the last few days (until today) and came to the conclusion that 32kb is indeed a good value. I just wanted to ensure that we don't have unwanted side effects. Expect a PR within the next hour.
Previously we calculated Netty's receive predictor size for HTTP and transport traffic based on available memory and worker nodes. This resulted in a receive predictor size between 64kb and 512kb. In our benchmarks this led to increased GC pressure. With this commit we set Netty's receive predictor size to 32kb. This value is in a sweet spot between heap memory waste (-> GC pressure) and effect on request metrics (achieved throughput and latency numbers). Closes #23185
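(For readers unfamiliar with the term: the "receive predictor" corresponds to Netty's `RecvByteBufAllocator`. Below is a minimal, generic Netty sketch of what a fixed 32kB prediction amounts to at the API level; it is not Elasticsearch's actual transport code, and the port and the empty channel initializer are placeholders.)

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class FixedReceivePredictorExample {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup boss = new NioEventLoopGroup(1);
        NioEventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                // predict a fixed 32kB capacity for every read on accepted connections
                .childOption(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(32 * 1024))
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // application handlers (HTTP codec etc.) would be added here
                    }
                });
            bootstrap.bind(9999).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```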
This is really great work @danielmitterdorfer, thank you for digging into this.
Thanks @jasontedor. I also checked our benchmarks and in almost all cases we see an upward trajectory. However, we still have one problematic case: outside-of-TLAB allocations for smaller heap sizes (2GB) in the full-text benchmark increased significantly, causing a drop in indexing throughput. I am currently investigating what's causing this and running further tests. So: more digging ahead.
The investigation is not yet finished but I want to present a few intermediate results. I ran the full-text benchmark and varied the heap size. I also attached flight recorder and produced GC logs for every trial run. Here is a summary of the allocation statistics:
Note: data for OOMED lines are (obviously) only complete up to the point where the OOME occurred. It's evident that the current receive predictor size of 32kb leads to an excessive increase in allocated objects outside of TLABs (which is way more expensive than allocation within a TLAB) for smaller heap sizes of 1GB and 2GB, and we see very small average object sizes in the regions outside of TLABs. I have a vague theory at the moment why this happens: when a request arrives, Netty places it into a buffer and accumulates these buffers (in a ...).

Next steps: I may do further analysis of TLAB sizes based on the raw data from the GC log if that is necessary, but first I will continue the experiments with our other tracks and also a track with a mixed workload, which models a more real-world use case. Unfortunately, my original testing was based only on 4GB heaps, where a 32kb receive predictor size works really well for all our benchmarks. Based on these results, where I look at a much broader range of heap sizes, I think the pragmatic choice is 64kb.
Okay, thanks for the continued investigation @danielmitterdorfer. I think given what you're discovering here I would recant my previous assessment and this change should stay out of 5.3 and bake a little longer. Are you in agreement with that?
Yes @jasontedor, I completely agree. It doesn't make sense to backport before we have a clear picture of the problem and a solution that works for all cases.
I also ran the nyc_taxis track once more and the patterns we see are similar.
Next up are more benchmarks with a mixed workload (indexing + querying). I am also reopening the issue as we'll definitely iterate on the value that we set by default.
With this commit we change the default receive predictor size for Netty from 32kB to 64kB, as our testing has shown that this leads to fewer allocations on smaller heaps (such as the default out-of-the-box configuration) and this value also works reasonably well for larger heaps. Closes #23185
I did run further benchmarks, including a mixed workload with queries. The tl;dr is that the main contributor to these problems is bulk indexing (and not so much querying). They also confirmed that a receive predictor size of 64kB is a sensible default value. Preparing the results will take a couple of days but I'll update the ticket. In parallel I am also improving my understanding of why a smaller receive predictor size (32kB) is causing much more trouble for some workloads (PMC) on smaller heaps (2GB), but it will take further experiments to isolate the cause. I could not observe significant differences in TLAB sizes for different receive predictor size settings. However, with smaller receive predictor sizes Elasticsearch seems to fill TLABs faster during request processing. During indexing, Lucene needs to allocate an excessive number of small objects - 3.1 million totaling a measly 81MB - outside of TLABs, so the JVM is swamped with tiny allocation requests outside of TLABs (which need synchronization across all Java threads). For comparison: the top 3 outside-of-TLAB allocation paths in Netty are roughly 1700 allocation requests totaling almost 30GB. With larger receive predictor sizes TLABs do not fill up that quickly, leaving room for Lucene to allocate memory within TLABs and leading to vastly improved performance.
Interesting, do you have insights into what these Lucene allocations are? I'm wondering whether those are allocations that actually happen in Lucene or objects that we allocate for Lucene but from Elasticsearch, such as field instances.
@jpountz Sure, I have flight recordings of all my experiments with allocation profiles enabled. Maybe I was not clear enough in my previous comment: I think that Lucene is the victim, not the culprit. If there is enough memory headroom, Lucene will happily allocate from within TLABs. The 3.1 million small objects are, by the way, allocated in ...
As I've written in my previous comment, there is not enough memory headroom for Lucene during indexing. As the other major contributor to memory allocations is Netty, this led me to investigate the impact of Netty's recycler again (see also #22452). Unless explicitly noted otherwise, this discussion applies to a heap size of 2GB. The following screenshot shows heap usage (blue) and GC pauses (red) during a (roughly) 1 minute window of a benchmark with the PMC track with the following settings:
Note: the node OOMEd during the benchmark. For the current defaults, heap usage looks as follows:

Median indexing throughput: 1176 docs/s

By disabling Netty's buffer pooling, we get much saner results:

Median indexing throughput: 1191 docs/s

We see similar behavior for the current default receive_predictor_size of 64kb:

Median indexing throughput: 1208 docs/s

So we can identify Netty's pooled allocator as the cause of high memory usage with smaller heaps (specifically: 2GB). In the current configuration it leaves little headroom for further allocations. While it may make sense to investigate further in that direction, I think this warrants a new ticket, as the purpose here was to find out what caused TLAB exhaustion. I still need to wrap my head around why smaller buffers cause more memory pressure. The number of allocations and the allocation sizes look very similar.
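(To make the distinction concrete: at the Netty API level, "buffer pooling" comes down to which `ByteBufAllocator` the channels use. A minimal, generic sketch, assuming plain Netty rather than Elasticsearch's actual configuration plumbing:)

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;
import io.netty.channel.ChannelOption;

public class AllocatorChoice {

    /**
     * Selects either Netty's pooled allocator (arena-based; it retains large
     * chunks of memory for reuse instead of handing them back to the GC) or
     * the unpooled allocator (allocates on demand and lets the GC reclaim).
     */
    static void configureAllocator(ServerBootstrap bootstrap, boolean usePooling) {
        ByteBufAllocator allocator = usePooling
            ? PooledByteBufAllocator.DEFAULT
            : UnpooledByteBufAllocator.DEFAULT;
        bootstrap.option(ChannelOption.ALLOCATOR, allocator)
                 .childOption(ChannelOption.ALLOCATOR, allocator);
    }
}
```

On a 2GB heap, the memory retained by the pooled allocator is exactly the headroom that indexing would otherwise use, which is consistent with the heap usage pattern described above.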
I'm confused, the recycler is disabled now and I do not see that you've enabled it. Did you mean buffer pool?
Yes, I meant the buffer pool and have corrected my comment above now. Thanks for double-checking!
Okay, now it makes sense; thank you.
With this commit we simplify our network layer by only allowing a fixed receive predictor size to be defined instead of a minimum and maximum value. This also means that the following (previously undocumented) settings are removed:

* http.netty.receive_predictor_min
* http.netty.receive_predictor_max

Using an adaptive sizing policy in the receive predictor is a very low-level optimization. The implications on allocation behavior are extremely hard to grasp (see our previous work in elastic#23185) and adaptive sizing would only be beneficial anyway if the message protocol allows very different message sizes (on the network level). To determine whether these settings are beneficial, we ran the PMC and nyc_taxis benchmarks from our macrobenchmark suite with various heap settings (1GB, 2GB, 4GB, 8GB, 16GB). In one scenario we used the fixed receive predictor size (`http.netty.receive_predictor`) with 16kB, 32kB and 64kB. We contrasted this with `http.netty.receive_predictor_min` = 16kB and `http.netty.receive_predictor_max` = 64kB. The results (specifically indexing throughput) were identical (accounting for natural run-to-run variance). In summary, these settings offer no benefit but only add complexity.
With this commit we simplify our network layer by only allowing a fixed receive predictor size to be defined instead of a minimum and maximum value. This also means that the following (previously undocumented) settings are removed:

* http.netty.receive_predictor_min
* http.netty.receive_predictor_max

Using an adaptive sizing policy in the receive predictor is a very low-level optimization. The implications on allocation behavior are extremely hard to grasp (see our previous work in #23185) and adaptive sizing does not provide a lot of benefit (see benchmarks in #26165 for more details).
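(As a sketch of what this simplification maps to in Netty terms, assuming plain Netty API usage rather than Elasticsearch's actual wiring; the initial size passed to the adaptive allocator below is an arbitrary choice, since only the minimum and maximum were exposed as settings.)

```java
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.RecvByteBufAllocator;

public class ReceivePredictorChoice {

    /**
     * Roughly what the removed min/max settings mapped to: an adaptive
     * predictor that grows or shrinks its buffer size guess between the two
     * bounds depending on how full previous reads were.
     */
    static RecvByteBufAllocator adaptive(int minBytes, int initialBytes, int maxBytes) {
        return new AdaptiveRecvByteBufAllocator(minBytes, initialBytes, maxBytes);
    }

    /** The simplified model: always predict the same buffer size. */
    static RecvByteBufAllocator fixed(int bytes) {
        return new FixedRecvByteBufAllocator(bytes);
    }

    public static void main(String[] args) {
        // the two configurations contrasted in the benchmarks above
        RecvByteBufAllocator adaptive = adaptive(16 * 1024, 16 * 1024, 64 * 1024); // initial value picked arbitrarily
        RecvByteBufAllocator fixed = fixed(64 * 1024);
    }
}
```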
`Netty4HttpServerTransport` uses the settings `http.netty.receive_predictor_min` and `http.netty.receive_predictor_max` to provide a properly configured `RecvByteBufAllocator` implementation to Netty. Their default value is controlled by the setting `transport.netty.receive_predictor_size`, which varies between 64kb and 512kb (per allocated buffer). The aforementioned allocator is responsible for allocating memory buffers when handling incoming network packets, and Netty will allocate one buffer per network packet.
We have run comparative benchmarks with the nyc_taxis track, once locally (i.e. via loopback) and once distributed (i.e. via Ethernet), and analyzed the allocation behavior of Elasticsearch.
Note: on this particular machine `transport.netty.receive_predictor_size` was 512kb. The root cause seems to be related to the MTU, which differs greatly between loopback and regular network devices (65536 vs. 1500). A smaller MTU leads to more network packets (but the buffer size stays the same), thus leading to more GC pressure.
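(A back-of-the-envelope illustration of why the MTU matters here; the 1 MB request size is a made-up example, and actual reads do not necessarily map one-to-one to network packets.)

```java
public class ReceivePredictorWaste {
    public static void main(String[] args) {
        long requestBytes = 1_000_000;          // hypothetical 1 MB bulk request
        int receivePredictorBytes = 512 * 1024; // buffer size per read, as on the machine above

        // loopback vs. a typical Ethernet device, as mentioned above
        for (int mtu : new int[] {65536, 1500}) {
            long packets = (requestBytes + mtu - 1) / mtu;
            // worst case under "one buffer per network packet"
            long allocatedBytes = packets * (long) receivePredictorBytes;
            System.out.printf("MTU %5d -> ~%4d packets -> up to ~%d MB of buffers allocated%n",
                    mtu, packets, allocatedBytes / (1024 * 1024));
        }
    }
}
```

Under these assumptions the loopback case stays in the single-digit MB range while the Ethernet case allocates hundreds of MB of mostly empty 512kb buffers for the same request, which is in line with the difference in allocation behavior described above.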
In a custom build of Elasticsearch we set `http.netty.receive_predictor_min` to 5kb and `http.netty.receive_predictor_max` to 64kb and got comparable allocation behavior between local and distributed benchmarks.

Note: our analysis focused only on `Netty4HttpServerTransport` for a single Elasticsearch node. It is expected that `Netty4Transport` exhibits similar behavior and we should change the buffer size there too.