Power8 - Monerod not syncing #2741
Comments
The default maxreaders limit is 126. How many threads are you using?
Default? The dedi has 160 threads though.
That make flag has nothing to do with what monerod uses at runtime. For the moment you're going to have to run with fewer threads. In fact, much fewer. Run
Still doesn't sync. https://hastebin.com/rojobedalo.sql
Try the patch in PR #2742
I have been encountering this issue (
20 worked for me. Dropped from 60 to 20. Haven't tried anything else.
This issue was recently encountered in openmonero in moneroexamples/openmonero#127 (comment), so I want to share my observations, as they might be useful for others who hit the same issue in their projects.

In openmonero, the issue appeared when more than 120 accounts were being used/imported at the same time. In that case there were 120 threads accessing lmdb in read-only mode to fetch tx data, which resulted in MDB_READERS_FULL. Initially I thought it was because the threads were not synchronized and accessed lmdb in parallel, but after further testing I found it is only about the number of threads. Whether they access lmdb in parallel or not did not matter: as long as there were 120+ readers of the lmdb (parallel or not), the error occurred.

The solution was to limit the number of threads that access lmdb. The 120+ threads no longer access lmdb directly; instead they submit their reads to a thread pool, as sketched below. The pool has a limited number of workers (e.g. 8) that access lmdb on behalf of the 120+ threads: the 120+ threads push read jobs onto the pool's queue, and the workers execute the jobs from the queue.
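A minimal sketch of that pattern, using only the standard library; the names (`ReadJobPool`, the worker count of 8) are illustrative and not taken from the openmonero code. The point is that only the pool's workers ever open LMDB read transactions, so the reader count stays bounded by the pool size.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ReadJobPool {
public:
    explicit ReadJobPool(std::size_t workers) {
        for (std::size_t i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }

    ~ReadJobPool() {
        {
            std::lock_guard<std::mutex> lk(m_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }

    // Called by any of the 120+ client threads. The job itself is what opens
    // the read transaction, so at most `workers` LMDB readers exist at once.
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lk(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stop_ || !jobs_.empty(); });
                if (stop_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // e.g. open an MDB_RDONLY txn, read tx data, abort the txn
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool stop_ = false;
};

// Usage sketch: one small pool serves all account threads.
// ReadJobPool pool(8);
// pool.submit([&] { /* read-only LMDB work on behalf of one account */ });
```

In this sketch `submit` is fire-and-forget; if the client threads need the result of the read, they would typically pair each job with a promise/future and wait on it.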
You've run into LMDB's default maxreaders setting. I don't believe we're explicitly configuring this, but you could simply add a call to raise it when opening the LMDB environment.
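For reference, a sketch of that call: `mdb_env_set_maxreaders` must be called after `mdb_env_create()` and before `mdb_env_open()`. The path and the value 512 below are illustrative only; pick a value above the number of threads that may read.

```cpp
#include <lmdb.h>
#include <cstdio>

int open_env_with_more_readers(MDB_env** env)
{
    int rc = mdb_env_create(env);
    if (rc) return rc;

    // Default maxreaders is 126; raise it before opening the environment.
    rc = mdb_env_set_maxreaders(*env, 512);
    if (rc) return rc;

    rc = mdb_env_open(*env, "/path/to/lmdb", 0, 0664);
    if (rc) std::fprintf(stderr, "mdb_env_open: %s\n", mdb_strerror(rc));
    return rc;
}
```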
Thanks. For now it's good. Using a thread pool also has other benefits, so we'll see how it goes.
2017-10-30 17:17:16.156 100064abf170 WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to create a read transaction for the db: MDB_READERS_FULL: Environment maxreaders limit reached
2017-10-30 17:17:16.157 100061dbf170 WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to create a read transaction for the db: MDB_READERS_FULL: Environment maxreaders limit reached
2017-10-30 17:17:16.162 100066dbf170 WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to create a read transaction for the db: MDB_READERS_FULL: Environment maxreaders limit reached
2017-10-30 17:17:16.219 1000159bf170 WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to create a read transaction for the db: MDB_READERS_FULL: Environment maxreaders limit reached
2017-10-30 17:17:16.219 100014fbf170 WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to create a read transaction for the db: MDB_READERS_FULL: Environment maxreaders limit reached
2017-10-30 17:17:27.162 [P2P3] WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to renew a read transaction for the db: Invalid argument
2017-10-30 17:17:27.373 [P2P7] WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to renew a read transaction for the db: Invalid argument
2017-10-30 17:17:27.967 [P2P5] WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to renew a read transaction for the db: Invalid argument
2017-10-30 17:17:30.184 [P2P5] WARN blockchain.db.lmdb src/blockchain_db/lmdb/db_lmdb.cpp:72 Failed to renew a read transaction for the db: Invalid argument
Ubuntu 16.04