RAM size increase on using RocksDB Java #5880
Can you obtain a heap profile? And is the DB size constant?
You should set max_open_files=100 or something like that; otherwise it is unbounded. RocksDB keeps the indices of all opened files in memory.
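As a minimal sketch of that suggestion using RocksJava's Options API (the database path and the limit of 100 are illustrative values, not taken from this thread):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class MaxOpenFilesExample {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        // Cap the number of open table files; with the default of -1, RocksDB
        // keeps every file (and its index blocks) open, so memory is unbounded.
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .setMaxOpenFiles(100);
             RocksDB db = RocksDB.open(options, "/tmp/example-db")) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}
```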
@koldat Thank you very much, I will change the number and update you with the result.
@koldat After reducing MAX_OPEN_FILES to 20, the RAM still keeps increasing.
You can take a look at some of my tips in #4112. There are many possible reasons, and in that issue I posted what we use to stop memory from growing; the key settings are listed there.
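The list itself is not reproduced in this thread; see #4112 for the original tips. As an illustration only (assumed settings, not necessarily @koldat's exact list), one widely cited way to bound RocksJava memory is a strictly capped shared block cache that also accounts for index and filter blocks:

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class BoundedCacheExample {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        // One cache instance can be shared across DB instances / column families.
        // strictCapacityLimit=true hard-caps it, at the cost of failed reads
        // when the cache is completely full.
        LRUCache cache = new LRUCache(256L * 1024 * 1024, 6, true);
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
                .setBlockCache(cache)
                .setCacheIndexAndFilterBlocks(true)         // charge index/filter blocks to the cache budget
                .setPinL0FilterAndIndexBlocksInCache(true); // but keep hot L0 metadata resident
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .setTableFormatConfig(tableConfig);
             RocksDB db = RocksDB.open(options, "/tmp/bounded-cache-db")) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}
```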
@Aravindhan1995 I also had a similar problem, but following @koldat's suggestions and the other issues reported here about memory usage helped me a lot to find a solution. This is what I did after analyzing the memory usage, and it led to a reduction in memory usage without changing the application or the storage layer.

(What I tried and found was that setting max_open_files aggressively reduces the index memory usage but introduces more latency; if you have a latency-sensitive application it may not be applicable, YMMV.)

Before doing anything, measure the RocksDB memory usage periodically, log it to a file, and analyze which component is using more memory; then decide on the solution. You can use the following properties on the RocksDB object to collect statistics:

```java
long sharedBlockCacheUsage = Long.parseLong(rocksDb.getProperty("rocksdb.block-cache-usage"));
long memTableUsage = Long.parseLong(rocksDb.getProperty("rocksdb.size-all-mem-tables"));
long tableReaderUsage = Long.parseLong(rocksDb.getProperty("rocksdb.estimate-table-readers-mem"));
```
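For illustration, a minimal sketch of that periodic measurement, assuming a RocksDB handle named db and a one-minute sampling interval (both assumptions, not from the comment):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.rocksdb.RocksDB;

// Hypothetical helper: samples the three properties above once a minute and
// prints them, so the growing component can be spotted over time.
public class MemoryStatsLogger {
    public static ScheduledExecutorService start(final RocksDB db) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                long blockCache   = Long.parseLong(db.getProperty("rocksdb.block-cache-usage"));
                long memTables    = Long.parseLong(db.getProperty("rocksdb.size-all-mem-tables"));
                long tableReaders = Long.parseLong(db.getProperty("rocksdb.estimate-table-readers-mem"));
                System.out.printf("block-cache=%d mem-tables=%d table-readers=%d bytes%n",
                        blockCache, memTables, tableReaders);
            } catch (Exception e) {
                e.printStackTrace(); // getProperty throws RocksDBException
            }
        }, 0, 1, TimeUnit.MINUTES);
        return scheduler;
    }
}
```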
Hello! I also encountered this problem; how did you solve it in the end?
@koldat I see that you are having heap memory issues as well. The heap of our Kafka Streams app keeps on growing, running on Python and CentOS. Our settings are: … How are you able to get it to work?
Hi @jestan, sorry to revive an old thread. We are facing the same issue and want to try jemalloc, but we don't know how. There are so few users of the Java RocksDB library that information on how to do this is non-existent. Your information could save weeks of frustration for all future developers unfortunate enough to pick the combo of RocksDB + Java.
@macduy If you are compiling RocksJava on Linux and you have the jemalloc library installed, it should be detected and support will be compiled in.
@macduy To use jemalloc you do not need to compile it in. We simply use LD_PRELOAD to replace the default allocator; even Java uses jemalloc after that:

```sh
export LD_PRELOAD=/usr/local/jemalloc5.2.1/lib/libjemalloc.so.2
```

This is what we have used in production for a very long time. Memory has been very stable since. I tried many allocators and jemalloc is simply the best.
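One way to confirm the preload actually took effect is to check the JVM's own memory mappings; a hedged, Linux-only sketch (the "libjemalloc" substring is an assumption about the library's file name):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class JemallocCheck {
    public static void main(String[] args) throws Exception {
        // /proc/self/maps lists every shared library mapped into this JVM;
        // if LD_PRELOAD worked, libjemalloc appears among them.
        boolean loaded = Files.readAllLines(Paths.get("/proc/self/maps")).stream()
                .anyMatch(line -> line.contains("libjemalloc"));
        System.out.println(loaded
                ? "jemalloc is preloaded"
                : "jemalloc NOT found; check LD_PRELOAD");
    }
}
```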
By the way, regarding my earlier comment: the jemalloc team, together with the Java team, found the issues, and they have been fixed (for about a year now). Just use the latest Java builds.
Thanks @koldat, I'm new to this, but it looks like this will help us get RocksDB working on Java in a stable manner without having to compile RocksDB ourselves.
According to facebook/rocksdb#5880 (comment), we just need to preload it instead of linking it at build time.
@areyohrahul We ended up just setting … We were going to try switching to jemalloc, but we found that the above was sufficient to solve the problem, so we didn't dig any deeper.
Thanks @macduy for responding, I'll give it a try.
@macduy I tried using this setting; one quick question: did your memory leak stop completely, or was it just delayed for a very long time with this setting?
I can't answer that with 100% certainty. What I know is that it massively reduced our crash rate due to OOM (from daily to about once a month), but I cannot conclusively say whether the memory leaks have stopped.
Got it, thanks.
I am using RocksDB as a queue for my application. The load is 5000 inserts and deletes per minute, plus continuous reads using a RocksDB iterator. The RAM size increases over time.
Configuration details:
JVM RAM: 2 GB
DB write buffer size: 256 MB
The process's RAM grows by 50-100 MB each day. I am closing all the RocksDB objects, and I am not sure what is causing the increase in memory.
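Since unclosed native handles (especially iterators) are a frequent cause of exactly this kind of slow growth, here is a hedged sketch of the queue-style workload described above with every handle in try-with-resources; the path, keys, and values are assumptions for illustration:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public class QueueDrainExample {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .setDbWriteBufferSize(256L * 1024 * 1024); // the 256 MB write buffer from the report
             RocksDB db = RocksDB.open(options, "/tmp/queue-db")) {
            db.put("task-1".getBytes(), "payload".getBytes());
            // In recent RocksJava, RocksIterator is AutoCloseable; forgetting to
            // close iterators leaks native memory on every scan.
            try (RocksIterator it = db.newIterator()) {
                for (it.seekToFirst(); it.isValid(); it.next()) {
                    // process(it.value()) ...
                    db.delete(it.key()); // queue semantics: consume, then delete
                }
            }
        } // native memory of db and options is released here
    }
}
```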