Enable -XX:+UseCompressedOops By Default #13187
An Oracle JVM will warn but not fail if -XX:+UseCompressedOops is set while the max heap size is above the threshold for compressed oops:
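For illustration (the heap size and exact warning text here are assumed and vary by JVM version), something along these lines:

```
$ java -Xmx64g -XX:+UseCompressedOops -version
Java HotSpot(TM) 64-Bit Server VM warning: Max heap size too large for Compressed Oops
java version ...
```

The JVM prints the warning and keeps going rather than refusing to start.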
You can also see how it will be on if you're under the threshold:
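(Illustrative invocation; -XX:+PrintCompressedOopsMode is a HotSpot diagnostic flag, and the heap size, addresses, and mode shown vary by machine and JVM version.)

```
$ java -Xmx31g -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -version
heap address: 0x..., size: 31744 MB, Compressed Oops mode: ...
java version ...
```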
But not if you're over the threshold:
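(Again illustrative; above the threshold the compressed oops mode line simply never appears.)

```
$ java -Xmx33g -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -version
java version ...
```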
You can also see it on with:
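(Roughly, with an assumed 31 GB heap; the flag-dump formatting differs slightly between JVM versions.)

```
$ java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
     bool UseCompressedOops    := true     {lp64_product}
```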
And by default:
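(That is, with no -Xmx at all, on a 64-bit HotSpot JVM whose default max heap lands under the limit:)

```
$ java -XX:+PrintFlagsFinal -version | grep UseCompressedOops
     bool UseCompressedOops    := true     {lp64_product}
```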
But not if we go over the limit:
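(Same command with an assumed oversized heap; ergonomics turns the flag off:)

```
$ java -Xmx64g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
     bool UseCompressedOops    := false    {lp64_product}
```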
Even if we specify the flag:
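(Illustrative again; the JVM warns and overrides the explicitly requested flag:)

```
$ java -Xmx64g -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops
Java HotSpot(TM) 64-Bit Server VM warning: Max heap size too large for Compressed Oops
     bool UseCompressedOops    := false    {lp64_product}
```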
In short, I don't think there's anything to do here.
Can we get at whether UseCompressedOops is enabled at runtime?
It can be obtained from a management bean on the OpenJDK line of JVMs. The IBM JVM has a similar concept (compressed pointers below 25 GB heaps) but uses a different flag. I'm not familiar enough with that VM to know what kind of introspection is available at runtime (I suspect it's possible, I just don't know). That said, I'm just not sure if this is a path that we should go down.
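For reference, here's a minimal sketch of what reading it from the management bean might look like on a HotSpot/OpenJDK VM (this uses the HotSpot-specific com.sun.management.HotSpotDiagnosticMXBean, so it won't work on JVMs that don't expose that bean):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class CompressedOopsProbe {
    public static void main(String[] args) {
        // HotSpot-specific diagnostic MXBean; unavailable on other JVM implementations
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // getVMOption returns the current value of the named -XX flag as a string
        String value = diagnostics.getVMOption("UseCompressedOops").getValue();
        System.out.println("UseCompressedOops = " + value);
    }
}
```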
The trouble with the path we are on is that this is one of the first things you'll want to know when helping people with high memory usage issues. It's not required by any means, but it's kind of "advanced" not to use it. I mean, it's one of those things that you want to have tested and profiled before doing in a high-traffic environment. I could see us adding a setting that stopped elasticsearch if it couldn't verify that compressed oops was enabled, with a helpful message. The user could disable the setting in elasticsearch.yml and go on, but they'd have to intentionally do it. That intentionality is what I'm looking for - right now compressed oops is something we think we rely on but we never test. We even wave our hands at the border: "max heap is 30GB", "max heap is 31.5GB", stuff like that. If we had an error or even a warning when we went over the compressed oops boundary, then we wouldn't have to be so hand-wavy.
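To make that concrete, a rough sketch of such a check (the parameter name and message are made up for illustration, and it reuses the HotSpot diagnostic bean shown above):

```java
// Hypothetical startup check: refuse to start when compressed oops are off,
// unless the user has explicitly opted out via a (made-up) setting.
static void ensureCompressedOops(boolean ignoreCompressedOopsCheck) {
    HotSpotDiagnosticMXBean diagnostics =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
    boolean enabled = Boolean.parseBoolean(
            diagnostics.getVMOption("UseCompressedOops").getValue());
    if (!enabled && !ignoreCompressedOopsCheck) {
        throw new IllegalStateException(
                "compressed oops are disabled; shrink the heap below the compressed oops "
                        + "threshold or explicitly acknowledge this configuration");
    }
}
```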
What relies on this? Where are the jazillions of objects being created? In most cases, at least the Lucene data structures use large byte[] arrays or address into memory-mapped files.
Are you sure that this is correct? To be clear, I think that we merely recommend it as a performance optimization and to avoid heaps getting so large that they increase the likelihood of long-running garbage collection cycles destroying the cluster. But I don't see that as a reason to completely prevent users that think they need very large heaps from having them. And given that, I don't see the need to add configuration complexity to Elasticsearch so that those users can opt out of such a check.
I haven't dug into it. It's an old, old bit of advice. Honestly, @pickypg will probably know better than I what happens without it.
I am 100% sure it's recommended.
That is a good point - if it's really just a question of GC pauses, then Elasticsearch shouldn't bother checking the flag.
Historically, I recommended heaps smaller than 30gb in order to get compression, since both ES and Lucene ended up allocating a lot of objects (and non-paged ones). It also served as a relatively safe number for not going crazy when it comes to heap size and GC. Now, many of the data structures, in both Lucene and ES, are properly paged and "bulked", and with doc values, this starts to become less of a concern. I wonder how much compressed oops matters now, compared to 2 years ago. In general, to me, less than 30gb has been a good number for avoiding long GC these days (G1 excluded), so I don't think we need to add this as a flag. We did a lot to improve memory in ES and Lucene; I wonder how things like doc values by default, and future improvements, will end up affecting our recommendation for heap sizes. For now, it is still safe to say the max is 30gb, as we learn more.
It seems that we've reached some consensus that we shouldn't add this. I'm going to close this issue unless someone thinks otherwise?
Fine by me.
To go along with our suggestion that heaps should never exceed the 32 GB barrier, we should softly enforce it by explicitly enabling -XX:+UseCompressedOops, which is on by default for heaps under the supported limit and off when you cross the limit. By enabling it manually, a JVM above the barrier should fail to start (unfortunately, I don't have a machine with enough RAM to test this on). I've seen murmurs that enabling the setting may lead to a performance reduction (in the comments), but this makes no sense because it is on by default. Still, this should be tested.
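Concretely, that would just mean always passing the flag when we launch the JVM, e.g. (assuming the standard ES_JAVA_OPTS hook; the exact plumbing depends on the startup scripts):

```
ES_JAVA_OPTS="-XX:+UseCompressedOops" ./bin/elasticsearch
```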