Extremely slow blockchain sync due to HDD high load #761
geth has the same issue: I0314 15:25:15.450172 core/blockchain.go:1070] imported 1 blocks, 11 txs ( 1.474 Mg) in 1m37.881s ( 0.015 Mg/s). #2333527 [b8969b19…]
Related issue in geth client:
@vikulin If you have time, it would be great if you could try with the latest. If that doesn't help, we will need similar reports. For me a regular sync takes a normal amount of time.
@cupuyc , can you please point to the fix number?
Happened in #762
The same result. Looks like there is no performance improvement since the last update.
@cupuyc , this is definitely an ethereumJ issue: when I exit from ethereumJ syncing with CTRL+C, my HDD usage drops to 0%.
@cupuyc , also my hardware is capable of such work: 6 GB RAM, SATA disk, Phenom II X4 950.
Are you syncing from blank database each time or do you have some database backup? |
I did not do any backups; I fetched all blockchain data from scratch.
@vikulin Hi! |
@zilm13 , I'm running the latest 1.5.0 develop build. Also, thank you for the clarification. As far as I know, these operations in old blocks should not need to be executed, because past blockchain data is static. Why do attackers' blocks have such a heavy impact on sync?
@vikulin during regular sync we execute all transactions for verification; it's the only way to calculate the state changes from this kind of data. It also gives developers an opportunity to add their own listeners and perform their own logic using the results of execution.
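The verification-by-replay idea described above can be illustrated with a toy sketch. This is not ethereumj code; a plain accumulator stands in for the EVM state, and the "state root" is just the final value. The point is that a block's claimed post-state can only be checked by actually applying every transaction:

```java
import java.util.List;

// Toy illustration (not ethereumj code) of why full sync must replay
// transactions: the only way to verify a block's claimed post-state is
// to apply every transaction and compare the result.
public class ReplayDemo {
    // Stand-in for EVM execution: each "transaction" mutates the state.
    static long applyAll(long state, List<Long> txs) {
        for (long tx : txs) {
            state += tx;
        }
        return state;
    }

    // A block is valid only if replaying its transactions reproduces
    // the post-state it claims in its header.
    static boolean verifyBlock(long preState, List<Long> txs, long claimedPostState) {
        return applyAll(preState, txs) == claimedPostState;
    }

    public static void main(String[] args) {
        System.out.println(verifyBlock(0, List.of(5L, 7L), 12)); // honest block
        System.out.println(verifyBlock(0, List.of(5L, 7L), 99)); // forged post-state
    }
}
```

Skipping the replay would mean trusting the claimed post-state blindly, which is why "static" historical blocks still cost execution time during a full sync.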
@zilm13 , could ethereumj determine which transactions were generated by attackers and exclude them from the execution process in the future?
@vikulin If you don't want to replay transactions, you could just use fast sync
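For reference, the switch in question is the same `sync.fast.enabled` key that appears later in this thread; a minimal ethereumj.conf fragment might look like this (sketch only — check your version's defaults):

```
# ethereumj.conf: download a recent state snapshot instead of
# replaying every historical transaction
sync.fast.enabled = true
```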
@vikulin That would require fuzzy logic that could hurt normal transactions. These transactions are correct; before the hard fork there were simply conditions that made it cheap to DoS clients with transactions. You can learn more about the changes made in Ethereum to make such attacks expensive in EIP 150. Also, after the hard fork there are special transactions removing empty account data, one by one, and we must keep the network state correct on each block rather than just make it faster. Anyway, we have fast sync today and it covers 99% of use cases. For the rest, these attack blocks are a permanent part of the chain's history.
@cupuyc , @zilm13 . I checked sync again. With fast sync=false I see: 04:18:21.668 INFO [sample] Blockchain sync in progress. Last imported block: #2430054 (9cbd54 <~ 694bba) Txs:17, Unc: 1 (Total: txs: 83, gas: 2119k) With fast sync it works at the same speed if I do not start from scratch:
leveldb seems really slow. I changed to use: Edit: might the default leveldb caching params need some adjustment? I assume inmem means I would have to rebuild my chain on every restart.
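For anyone wanting to experiment with the storage backend as described, the switch is a one-line config change. The key name below (`keyvalue.datasource`) is my assumption of the relevant ethereumj setting, so verify it against your ethereumj.conf before relying on it:

```
# Assumed ethereumj.conf key: selects the key-value backend.
# "inmem" avoids disk I/O entirely but loses the chain data on restart.
keyvalue.datasource = leveldb   # or: inmem, rocksdb
```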
still finding optimal values, but so far:
this seems to work pretty well when using a JVM heap of: I am not using fast sync, and am conducting tests on an m4.large with an EFS volume for the blockchain on Amazon Web Services. Is there a guide or documentation somewhere on what proportions I should use for stateCacheSize, blockQueueSize, headerQueueSize, and maxStateBloomSize in relation to writeCacheSize? Despite keeping the sum of these values under writeCacheSize, I get heap out-of-memory errors, even though my writeCacheSize = 6144 is under my heap size of 7500.
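One plausible reason the heap can still overflow even with writeCacheSize below the heap size is Java object overhead: the raw cached payload is inflated by object headers, map entries, and wrapper arrays. The sketch below is a back-of-envelope estimate only — the 2x overhead factor and the 500 MB baseline are assumptions for illustration, not measured ethereumj numbers:

```java
// Hypothetical back-of-envelope heap budget for the cache settings
// quoted above. The overhead factor models Java object/map bookkeeping
// on top of raw payload bytes (commonly 1.5x-3x); 500 MB stands in for
// everything else the node keeps on the heap.
public class HeapBudget {
    static double requiredHeapMb(double writeCacheMb, double overheadFactor, double baselineMb) {
        return writeCacheMb * overheadFactor + baselineMb;
    }

    public static void main(String[] args) {
        // 6144 MB write cache at 2x overhead plus a 500 MB baseline
        System.out.printf("%.0f MB%n", requiredHeapMb(6144, 2.0, 500)); // well above a 7500 MB heap
    }
}
```

Under these assumed numbers the effective requirement lands well above the 7500 MB heap, which would explain the OOM errors despite the configured sum looking safe.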
Hi guys, new contributor here, my first post. I hope this is the correct place to ask.

The issue is that creating and synchronising the blockchain for the first time is progressively slowing down, from 16329 blocks per hour during the first hour to 10406 blocks per hour after 4 hours. The same pattern has been reported previously by Geth users, see https://www.reddit.com/r/ethereum/comments/5e9q0i/geth_is_very_slow_to_sync_after_block_2420000/ , and in fact the process became very slow around the same blocks.

Hoping to reset something without killing the process, I turned off the router but left the program running and went to sleep. A few hours later I turned the router back on. While the router was off nothing new was added to the chain, but as soon as I brought it back online it started downloading and synchronising new blocks. It was a bit faster than before, but not by much: 13289 blocks per hour. There are roughly 1505379 blocks remaining, so at this speed it will take more than 4 days to finish, assuming it does not slow down further.

I am saving all the output of this process, one line of text every 10 seconds; if this can be used by anyone to retrace what is going on and improve it, just ask me. As mentioned, this is my first attempt with Ethereum and I do not really know much about anything yet; any help, hints, or advice is appreciated. I plan to install and work with the C++ and Java versions of Ethereum and followed the instructions here https://github.com/ethereum/ethereumj: Running from command line:
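The "more than 4 days" estimate above can be checked with a few lines (a hypothetical helper, not part of ethereumj — the figures are the ones quoted in the comment):

```java
// Hypothetical helper: rough sync ETA from remaining blocks and the
// observed import rate. Not part of ethereumj.
public class SyncEta {
    static double etaDays(long remainingBlocks, double blocksPerHour) {
        return remainingBlocks / blocksPerHour / 24.0;
    }

    public static void main(String[] args) {
        // ~1,505,379 blocks remaining at ~13,289 blocks/hour
        System.out.printf("%.1f days%n", etaDays(1_505_379, 13_289)); // roughly 4.7 days
    }
}
```

At a steady 13,289 blocks/hour this works out to about 4.7 days, consistent with the estimate in the comment — and any further slowdown only pushes it out.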
The machine is a MacBook Pro with 16 GB RAM, 4 cores, and a new 850 GB SSD drive. How can I stop everything without losing what is already done? Thank you
Hello!
I don't understand why a flush on an SSD takes more than 15 min. There is only 256 MB to flush; it should be done in less than one second on any SSD (by creating a memory representation of the file if needed, then flushing it).
sync.fast.enabled = false
I have a similar problem and did some digging. I now know what part of the code is slow, but unfortunately not why or how to speed it up. I'm posting this in the hope that it will help someone else figure out the rest. Note that while I also saw slowness starting around block 2,400,000, I didn't run these tests until around block 2,850,000, so it is possible that my problems are different from those reported by @vikulin.
TL;DR
Flushing the
Details
Looking at the output, it seems that it is the flush process that is going slow: It is so slow that flushes fail to complete before the next flush is triggered. This results in the next flush having to wait: Looking at the
Profiling with VisualVM, it appears that most of the time is being spent in the
Running
From reading the source code it looks like
Based on these results, I set the following configurations, but it did not seem to help:
Summary
@vbarzov @adamsmd We are currently switching to a new DB (RocksDB); the new release will use it instead of LevelDB. This could help, but blockchain synchronisation is very IOPS-constrained in any case, so you'd better run it on an SSD. I can only recommend turning pruning off (
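Turning pruning off as recommended is a one-line config change. The key name below is my understanding of ethereumj's default configuration — verify it against your version's ethereumj.conf:

```
# Disable state pruning: uses more disk space, but avoids the extra
# random writes pruning performs during sync
database.prune.enabled = false
```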
And not HBase?
Resolved in 1.7.0 |
ethereumj v1.5.0 was built recently from the develop branch.