Limit RocksDB memory requirements #43
I can suggest disabling auto-compactions and using smaller values for block_size and max_open_files (I had a similar issue at romanz/electrs#30).
Thank you very much for your suggestions. I am going to try some of them. I am afraid that with auto-compactions disabled, the DB size will grow significantly, as we overwrite data in the database. As I understand the RocksDB documentation, larger values of block_size should give a better memory footprint. You are certainly using a very large block size (512kB) compared to our 16kB or 32kB. After I experiment with the options, I will post my findings here.
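For anyone who wants to experiment with the same knobs, here is a minimal sketch of how they could be set through a Go RocksDB binding. The tecbot/gorocksdb API is assumed here, and the values and path are only illustrative, not what Blockbook ships:

```go
package main

import (
	"log"

	"github.com/tecbot/gorocksdb"
)

func openTunedDB(path string) (*gorocksdb.DB, error) {
	// Block-based table options: block_size is one of the knobs discussed above;
	// the value here is only illustrative.
	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	bbto.SetBlockSize(32 << 10) // 32kB data blocks

	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	opts.SetBlockBasedTableFactory(bbto)
	opts.SetMaxOpenFiles(256)            // fewer open files -> fewer index/filter blocks kept in memory
	opts.SetDisableAutoCompactions(true) // trades a larger DB size for lower memory/IO during initial sync

	return gorocksdb.OpenDb(opts, path)
}

func main() {
	db, err := openTunedDB("/tmp/testdb") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```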
I was able to sync Blockbook from scratch for GRS mainnet on a 2GB VPS by restarting the daemon every minute:
Without restarts it was killed by the OOM killer after several minutes. So my guess is that GC does not happen often enough, which results in much higher memory requirements during the initial sync. I suggest forcing GC after every 1,000 or 10,000 blocks to reduce memory requirements.
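A rough sketch of what forcing GC in the sync loop could look like. The `processBlock` helper, the heights and the interval are made up for illustration, and note that `debug.FreeOSMemory` only trims the Go heap, not RocksDB's native allocations:

```go
package main

import (
	"runtime/debug"
)

// processBlock is a hypothetical per-block indexing step; the real sync loop differs.
func processBlock(height uint32) error {
	// ... fetch and index the block ...
	return nil
}

// syncBlocks forces the Go runtime to collect and return memory to the OS
// every gcInterval blocks, as suggested above.
func syncBlocks(from, to, gcInterval uint32) error {
	for h := from; h <= to; h++ {
		if err := processBlock(h); err != nil {
			return err
		}
		if gcInterval > 0 && h%gcInterval == 0 {
			debug.FreeOSMemory() // runs a GC and releases freed memory back to the OS
		}
	}
	return nil
}

func main() {
	_ = syncBlocks(0, 100000, 10000)
}
```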
Not that easy. I've tried doing …
A good test of the graceful shutdown procedure :) Actually, there are probably better ways to reduce the memory footprint of the initial sync. Unfortunately, we did not have time to document them yet.
Martin, thank you, these options helped to reduce memory usage. Still, Blockbook caches enough that I had to restart it once in the middle of the sync to prevent the OOM killer.
It is probably not Blockbook itself but RocksDB that is taking the memory. This is exactly why this issue exists: we are not able to control the memory usage of RocksDB as much as we would like.
If it is only the initial synchronization, would creating a swapfile be a good solution?
This issue is really fatal. I have 16 GB RAM and 4 cores, and it always fails anyway.
I …
16 GB RAM should be more than enough for Litecoin. Is it really a memory problem? Have you tried to run the initial import with the settings mentioned in this comment? Especially the flag …
I can confirm that the parameters …
I did: … but it is still showing 1 worker.
Hi, the workers are goroutines in the same process; you cannot see them using …
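Since the workers are goroutines inside a single process, tools like `ps` or `top` only show one blockbook process. A generic Go sketch for inspecting them; the port and the use of pprof are assumptions, not necessarily how Blockbook exposes this:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on the default mux
	"runtime"
)

func main() {
	// Print the number of goroutines in this process; the OS only sees one process.
	fmt.Println("goroutines:", runtime.NumGoroutine())

	// Optionally expose pprof so goroutines can be listed with:
	//   go tool pprof http://localhost:6060/debug/pprof/goroutine
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```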
We should give this a try: https://blog.cloudflare.com/the-effect-of-switching-to-tcmalloc-on-rocksdb-memory-use/ It basically explains all our issues, including why having only one worker makes the memory footprint much better.
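To illustrate why the number of workers matters for memory: each in-flight worker holds block data until it is indexed, so capping the worker count caps the data held in memory at once. A hypothetical sketch; names like `getBlock` and `syncWithWorkers` are illustrative, not Blockbook's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// getBlock is a hypothetical fetch-and-index step.
func getBlock(height uint32) error {
	fmt.Println("processing block", height)
	return nil
}

// syncWithWorkers fetches blocks with at most `workers` goroutines in flight.
// Fewer workers means fewer blocks held in memory concurrently.
func syncWithWorkers(heights []uint32, workers int) {
	sem := make(chan struct{}, workers) // counting semaphore
	var wg sync.WaitGroup
	for _, h := range heights {
		sem <- struct{}{} // blocks once `workers` goroutines are already running
		wg.Add(1)
		go func(h uint32) {
			defer wg.Done()
			defer func() { <-sem }()
			_ = getBlock(h)
		}(h)
	}
	wg.Wait()
}

func main() {
	syncWithWorkers([]uint32{1, 2, 3, 4, 5}, 1) // e.g. a single worker
}
```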
There is now an option to build Blockbook with TCMalloc, see 212b767 and the documentation.
Does anybody have experience with the hardware requirements during the initial sync of bcashsv? It is "supported" in the sense that there is a config file, but since Trezor does not seem to run an instance, I am not sure whether there is actual experience with stability and resource requirements in the presence of 1GB blocks.
How much memory is required to sync Bitcoin and how long does it take?
Hi, with bulk import the memory required is at least 32GB and it takes about 24-36 hours. The advantage of running without bulk import mode is that even if you run out of memory and the process is killed, you can just restart it and it will continue (with bulk import you have to start from the beginning). If your computer is memory restricted, I would opt for the non-bulk mode.
Especially during the initial index import, the RocksDB memory usage of Blockbook is large and unpredictable. Find a way (options) to limit the memory usage.
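One possible direction, complementing the options discussed above, is to put hard caps on the two largest RocksDB consumers: the block cache (including index and filter blocks) and the memtables. A hedged sketch against the tecbot/gorocksdb binding; the sizes and path are illustrative, not Blockbook's defaults:

```go
package main

import (
	"log"

	"github.com/tecbot/gorocksdb"
)

func openBoundedDB(path string, cacheBytes int) (*gorocksdb.DB, error) {
	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	// One capped LRU cache for data blocks; index and filter blocks are
	// charged to the same cache instead of growing without bound.
	bbto.SetBlockCache(gorocksdb.NewLRUCache(cacheBytes))
	bbto.SetCacheIndexAndFilterBlocks(true)

	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	opts.SetBlockBasedTableFactory(bbto)
	// Memtables are the other big consumer: bound their size and count.
	opts.SetWriteBufferSize(64 << 20) // 64MB per memtable
	opts.SetMaxWriteBufferNumber(2)   // at most 2 memtables in memory

	return gorocksdb.OpenDb(opts, path)
}

func main() {
	db, err := openBoundedDB("/tmp/testdb", 512<<20) // e.g. 512MB block cache
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```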