I'm currently evaluating ELK for my company. I worked successfully with logstash-2.0.0-betaX and elasticsearch-2.0.0-betaX, and upgraded to elasticsearch-2.0.0-rc1 a few days ago.
Since then, I get a "java.lang.OutOfMemoryError: GC overhead limit exceeded" after Elasticsearch has been running for only a few minutes.
Here is my setup:
- ELK running on Windows Server 2012 with 8 GB RAM
- logstash-2.0.0-beta3 indexing log files (accessible through a network share) and logs from Oracle databases (logstash-input-jdbc)
- major changes to the elasticsearch.yml configuration
- ES_HEAP_SIZE (sketch below)
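The GC lines below show the heap capped near 2 GB ([1.9gb]), so only a fraction of the 8 GB box is going to Elasticsearch. For reference, a minimal sketch of raising the heap on Windows with the 2.x launch scripts; the 4g value is my assumption (the usual half-of-RAM guidance), not taken from the report:

```
REM Sketch: raise the Elasticsearch 2.x heap before starting the node.
REM 4g is an assumed value (half of the 8 GB RAM), not from the original report.
SET ES_HEAP_SIZE=4g
bin\elasticsearch.bat
```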
The exact same setup with elasticsearch-2.0.0-beta1 works fine.
Here is the complete stack trace:
```
[2015-10-27 15:07:15,107][INFO ][monitor.jvm ] [node_test_dc_1] [gc][old][489][24] duration [5.3s], collections [1]/[5.4s], total [5.3s]/[1.5m], memory [1.5gb]->[1.5gb]/[1.9gb], all_pools {[young] [243.1mb]->[245.5mb]/[268.5mb]}{[survivor] [0b]->[0b]/[205mb]}{[old] [1.3gb]->[1.3gb]/[1.3gb]}
[2015-10-27 15:09:11,562][INFO ][monitor.jvm ] [node_test_dc_1] [gc][old][521][62] duration [7s], collections [1]/[7s], total [7s]/[3.4m], memory [1.5gb]->[1.5gb]/[1.9gb], all_pools {[young] [258.3mb]->[259.7mb]/[268.5mb]}{[survivor] [0b]->[0b]/[205mb]}{[old] [1.3gb]->[1.3gb]/[1.3gb]}
[2015-10-27 15:11:52,501][WARN ][index.engine ] [node_test_dc_1] [etl-2015.10.14][0] Failed to close SearcherManager
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1563)
    at java.lang.Class.checkPackageAccess(Class.java:2372)
    at java.lang.Class.checkMemberAccess(Class.java:2351)
    at java.lang.Class.getMethod(Class.java:1783)
    at org.apache.lucene.store.MMapDirectory$2$1.run(MMapDirectory.java:289)
    at org.apache.lucene.store.MMapDirectory$2$1.run(MMapDirectory.java:286)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.lucene.store.MMapDirectory$2.freeBuffer(MMapDirectory.java:286)
    at org.apache.lucene.store.ByteBufferIndexInput.freeBuffer(ByteBufferIndexInput.java:378)
    at org.apache.lucene.store.ByteBufferIndexInput.close(ByteBufferIndexInput.java:357)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.close(Lucene50CompoundReader.java:120)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.apache.lucene.index.SegmentCoreReaders.decRef(SegmentCoreReaders.java:152)
    at org.apache.lucene.index.SegmentReader.doClose(SegmentReader.java:169)
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)
    at org.apache.lucene.index.StandardDirectoryReader.doClose(StandardDirectoryReader.java:359)
    at org.apache.lucene.index.FilterDirectoryReader.doClose(FilterDirectoryReader.java:134)
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:130)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:274)
    at org.apache.lucene.search.ReferenceManager.swapReference(ReferenceManager.java:62)
    at org.apache.lucene.search.ReferenceManager.close(ReferenceManager.java:146)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.elasticsearch.index.engine.InternalEngine.closeNoLock(InternalEngine.java:954)
    at org.elasticsearch.index.engine.Engine.failEngine(Engine.java:517)
    at org.elasticsearch.index.engine.Engine.maybeFailEngine(Engine.java:556)
    at org.elasticsearch.index.engine.InternalEngine.maybeFailEngine(InternalEngine.java:886)
```
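As an aside, the pressure visible in the [gc][old] lines (old gen pinned at [1.3gb]/[1.3gb]) can also be watched live through the nodes stats API, which exists in 2.0; a minimal check against a local node, assuming curl is available on the box:

```
curl -s "http://localhost:9200/_nodes/stats/jvm?pretty"
```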
Heap dump analysis:
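To produce a comparable dump for analysis, here is a hedged sketch using the standard HotSpot flags, assuming they are passed through ES_JAVA_OPTS as the stock 2.x scripts allow; the dump path is a placeholder of mine:

```
REM Sketch: have the JVM write a heap dump when it hits OOM (standard HotSpot flags).
REM C:\es-dumps is a placeholder path, not from the original report.
SET ES_JAVA_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\es-dumps
bin\elasticsearch.bat
```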