
Enable -XX:+UseCompressedOops By Default #13187

Closed · pickypg opened this issue Aug 28, 2015 · 10 comments

pickypg (Member) commented Aug 28, 2015

To go along with our suggestion that heaps should never exceed the 32 GB barrier, we should softly enforce it by explicitly enabling -XX:+UseCompressedOops, which is on by default for heaps under the supported limit and off when you cross the limit. By enabling it manually, a JVM above the barrier should fail to start (unfortunately, I don't have a machine with enough RAM to test this on).

I've seen murmurs (in the comments) that enabling the setting may lead to a performance reduction, but this makes no sense because it is already on by default. Still, this should be tested.

jasontedor (Member) commented:

An Oracle JVM will warn but not fail if -XX:+UseCompressedOops is set and in conflict with the heap size:

13:59:30 [jason:~/src/oops] $ java -Xmx32g -XX:+UseCompressedOops Oops  
Java HotSpot(TM) 64-Bit Server VM warning: Max heap size too large for Compressed Oops
Hello, oops!
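
For reference, the Oops class in these sessions is nothing more than a throwaway program that lets the JVM start with the flags under test; a minimal sketch consistent with the output above:

public class Oops {
    public static void main(String[] args) {
        // Exists only so the JVM boots with the flags under test and
        // prints something once it is running.
        System.out.println("Hello, oops!");
    }
}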

You can also see that it will be on if you're under the threshold:

14:00:04 [jason:~/src/oops] $ java -Xmx30g -XX:+UseCompressedOops -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode Oops
Protected page at the reserved heap base: 0x000000010af80000 / 524288 bytes
heap address: 0x000000010b000000, size: 30802 MB, Compressed Oops with base: 0x000000010afff000
Hello, oops!

But not if you're over the threshold:

14:03:12 [jason:~/src/oops] $ java -Xmx32g -XX:+UseCompressedOops -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode Oops
Java HotSpot(TM) 64-Bit Server VM warning: Max heap size too large for Compressed Oops
Hello, oops!
14:03:12 [jason:~/src/oops] $

You can also see that it's on with -XX:+PrintFlagsFinal:

14:06:51 [jason:~/src/oops] $ java -Xmx30g -XX:+UseCompressedOops -XX:+PrintFlagsFinal Oops | grep Oops
     bool UseCompressedOops                        := true            {lp64_product}

And by default:

14:06:52 [jason:~/src/oops] $ java -Xmx30g -XX:+PrintFlagsFinal Oops | grep Oops
     bool UseCompressedOops                        := true            {lp64_product}

But not if we go over the limit:

14:07:43 [jason:~/src/oops] $ java -Xmx32g -XX:+PrintFlagsFinal Oops | grep Oops
     bool UseCompressedOops                         = false           {lp64_product}     

Even if we specify the flag:

14:08:23 [jason:~/src/oops] $ java -Xmx32g -XX:+UseCompressedOops -XX:+PrintFlagsFinal Oops | grep Oops
Java HotSpot(TM) 64-Bit Server VM warning: Max heap size too large for Compressed Oops
     bool UseCompressedOops                        := false           {lp64_product}  

In short, I don't think there's anything to do here.

nik9000 (Member) commented Aug 28, 2015

In short, I don't think there's anything to do here.

Can we get at UseCompressedOops from Java or is that lost to us?

jasontedor (Member) commented:

Can we get at UseCompressedOops from Java or is that lost to us?

It can be obtained from a management bean on the OpenJDK line of JVMs.
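
A minimal sketch of that approach on a HotSpot/OpenJDK JVM (the class name is just for illustration):

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;

public class CompressedOopsProbe {
    public static void main(String[] args) {
        // HotSpot-specific diagnostic bean; not available on other JVMs such as IBM J9.
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        VMOption oops = hotspot.getVMOption("UseCompressedOops");
        System.out.println("UseCompressedOops = " + oops.getValue());
    }
}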

The IBM JVM has a similar concept (compressed pointers below 25 GB heaps) but uses a different flag. I'm not familiar enough with that VM to know what kind of introspection is available at runtime (I suspect it's possible, I just don't know).

That said, I'm just not sure if this is a path that we should go down.

nik9000 (Member) commented Aug 31, 2015

That said, I'm just not sure if this is a path that we should go down.

The trouble with the path we are on is that this is one of the first things you'll want to know when helping people with high memory usage issues. It's not required by any means, but it's kind of "advanced" not to use it. I mean, it's one of those things that you want to have tested and profiled before doing in a high-traffic environment.

I could see us adding a setting that stopped Elasticsearch with a helpful message if it couldn't verify that compressed oops was enabled. The user could disable the setting in elasticsearch.yml and carry on, but they'd have to do it intentionally (a hypothetical sketch follows at the end of this comment).

That intentionality is what I'm looking for - right now compressed oops is something we think we rely on but we never test. We even wave our hands at the border: "max heap is 30GB", "max heap is 31.5GB", stuff like that. If we had an error, or even a warning, when we went over the compressed oops boundary, then we wouldn't have to be so hand-wavy.
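
A hypothetical sketch of the kind of check being described, reusing the HotSpot management bean mentioned earlier in the thread (the setting name and method here are made up for illustration):

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class CompressedOopsBootstrapCheck {

    // Hypothetical startup hook: 'enforce' would come from a made-up setting
    // such as bootstrap.require_compressed_oops in elasticsearch.yml.
    static void checkCompressedOops(boolean enforce) {
        boolean enabled;
        try {
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            enabled = Boolean.parseBoolean(
                    hotspot.getVMOption("UseCompressedOops").getValue());
        } catch (Exception e) {
            // Non-HotSpot JVMs don't expose this bean, so we can't verify either way.
            return;
        }
        if (!enabled && enforce) {
            throw new IllegalStateException(
                    "compressed oops are disabled; the heap is probably over the threshold - "
                            + "lower -Xmx or disable this check intentionally");
        }
    }
}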

rmuir (Contributor) commented Aug 31, 2015

That intentionality is what I'm looking for - right now compressed oops is something we think we rely on but we never test.

What relies on this: where are jazillions of objects being created? In most cases, at least, Lucene data structures are using large byte[] or addressing into memory-mapped files.

jasontedor (Member) commented:

That intentionality is what I'm looking for - right now compressed oops is something we think we rely on but we never test.

Are you sure that this is correct?

To be clear, I think that we merely recommend it as a performance optimization and to avoid heaps getting so large that they increase the likelihood of long-running garbage collection cycles destroying the cluster. But I don't see that as a reason to completely prevent users who think they need very large heaps from having them. And given that, I don't see the need to add configuration complexity to Elasticsearch just so that running with UseCompressedOops disabled is prevented by default.

nik9000 (Member) commented Aug 31, 2015

What relies on this: where are jazillions of objects being created? In most cases, at least, Lucene data structures are using large byte[] or addressing into memory-mapped files.

I haven't dug into it. It's an old, old bit of advice. Honestly, @pickypg will probably know better than I do what happens without it.

That intentionality is what I'm looking for - right now compressed oops is something we think we rely on but we never test.
Are you sure that this is correct?

I am 100% sure it's recommended.

To be clear, I think that we merely recommend it as a performance optimization and to avoid heaps getting too large increasing the likelihood of long-running garbage collection cycles destroying the cluster. But I don't see that as a reason to completely prevent users that think they need very large heaps from having them.

That is a good point - if it's really just a question of GC pauses, then Elasticsearch shouldn't bother checking the flag.

kimchy (Member) commented Aug 31, 2015

Historically, I recommended heaps smaller than 30GB in order to get compression, since both ES and Lucene ended up allocating a lot of objects (and non-paged ones). It also served as a relatively safe number for not going crazy when it comes to heap size and GC.

Now, many of the data structures, in both Lucene and ES, are properly paged and "bulked", and with doc values this starts to become less of a concern. I wonder how much compressed oops matters now, compared to 2 years ago.

In general, to me, less than 30GB has been a good number for avoiding long GC pauses nowadays (G1 excluded), so I don't think we need to add this as a flag. We did a lot to improve memory in ES and Lucene; I wonder how things like doc values by default, and future improvements, will end up affecting our recommendation for heap sizes. For now, it is still safe to say the max is 30GB, as we learn more.

jasontedor (Member) commented:

It seems that we've reached some consensus that we shouldn't add this. I'm going to close this issue unless someone thinks otherwise?

nik9000 (Member) commented Aug 31, 2015

It seems that we've reached some consensus that we shouldn't add this. I'm going to close this issue unless someone thinks otherwise?

Fine by me.
