Can't keep peer count above min #7531
Comments
+1, same with the Kovan testnet. Even after closing Parity my machine is unusable and I have to restart it.
12 hours later, it is still dropping below min peers (3).
I am experiencing the same.
Did a db kill; resyncing. Will report back.
The db kill seems to help so far (it remains close to the 250 max peers).
I delete nodes.conf every time I restart Parity; it helps initially, but peers start dropping soon after.
I have noticed that the peer count stays at "reasonable" numbers while syncing but drops immediately once the node is in sync. Is there something that would explain or help with this? Edit: this was on a clean sync started this morning on 1.8.6. Cheers.
I also had Parity crash for the first time last night. I started it again this morning and so far it is running.
Thanks @5chdn. I just restarted and it hasn't crashed since.
Can anyone confirm whether this is a problem when running 1.8.6 on a clean database (i.e. not created from a previous Parity version)? We increased the number of threads that perform background compactions, and on old databases I think the heavy compactions are starving the other threads.
@andresilva this is still a problem in 1.8.6 on a clean database.
I have the same issue with the current source. #7556
I have this issue with 1.7.13 on a fresh database (and a fresh nodes.json), so whatever is causing this must have been around for a while. Edit: also on a 1.9.0 node on Kovan.
Issue resolved, at least for me.
I am seeing the same issue on a fully synced ETC database. I have killed the db, removed the nodes.json, etc.

2018-02-07 09:43:29 Starting Parity/v1.9.2-beta-0feb0bb-20180201/x86_64-linux-gnu/rustc1.23.0

Here is the interesting part: the same thing persists across several other Parity chains, but it is also happening on my PIRL, ELLA, and UBIQ daemons, which are not the same client. I am trying to work out whether this is an issue with a recent Ubuntu update. Can I assume everyone on this issue is running Ubuntu 16.04?

Brad
Yes. Ubuntu 16.04.1 LTS.
This has been ongoing for a while now but got worse recently after I stopped all my Parity nodes to upgrade from 1.8.6 to 1.9.2. Here is something that leads me to believe it is something with the OS.

Using Parity with the ETC chain, I can start it up on an existing Ubuntu 16.04 co-lo system that has been running since September. It will run and not connect to a node for hours, and then maybe gain one node. I stop it running there, install a fresh 16.04 VM on my home machine, and copy over the chain, keys, and db. I start it up on the VM and immediately start gaining peers. Once it is at 30 or so I can copy the nodes.json from the VM back over to my co-lo server. It will then run with those peers for hours or a few days before the peers disconnect, at which point it does not discover any further peers. This seems to be something with how geth and parity do peer discovery.

I have a sync log that is 35 MB. Where can I dump this? It covers both the copied-over nodes.json and a fresh nodes.json where it cannot discover any peers.

Brad
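For anyone stuck in the same spot while discovery is misbehaving, one possible stopgap (not a confirmed fix for the underlying issue) is to pin known-good peers in config.toml instead of copying nodes.json between machines. A minimal sketch, where the enode URL and the reserved-peers path are placeholders:

```toml
# Sketch only: reduce reliance on discovery by pinning peers explicitly.
# The enode URL and file path below are placeholders, not real nodes.
[network]
min_peers = 25
max_peers = 50
# Seed the node with a peer you control or trust.
bootnodes = [
  "enode://<node-id>@<node-ip>:30303"
]
# Or keep one enode URL per line in a file and reference it here.
# reserved_peers = "/home/user/reserved-peers.txt"
# reserved_only = true   # connect only to reserved peers; useful for isolating discovery problems
```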
@Serpent6877 looks like the ETC bootnodes are down.
Can't reproduce peering issues with 1.11.1 anymore. Can easily get 256+ peers.
@nlandeker networking issues? @c0deright how's your node now?
I've been running 1.11.0 for 3 days and 3 hours now and it's looking good. Peer count is at 22 right now.
@5chdn 1.11.1 appears stable so far.
🎉
Is it possible to backport this patch to the 1.9 branch?
No, 1.9 is EOL. Why do you need it?
Is it possible to get an ETA on this being incorporated into 1.10?
With I've tried deleting Right now I'm running with:
Very interesting: after copying Also, with the new version there are more errors like: "There are too many transactions in the queue. Your Please note: I'm running with The fee is at least Any idea what might be causing these two issues? Thanks.
Because the only release that has this fix incorporated is a beta release. It fixes a serious issue, we're using a stable version that was released only 29 days ago, and you're telling me we should use beta software to get rid of this issue? That's not how we operate. If 1.10-stable had this fix we would think about switching, but not to 1.11 at this stage. Please reconsider backporting the fix to at least 1.10.
I'm rolling back to
@gituser I agree, Parity 1.11.1 is not usable atm. We are also using 1.10.4 and it appears to be OK so far.
Can you create a new issue where we can discuss the tx queue issues? I'll reopen this one to further investigate the peering issues.
Check this issue: #8679
@c0deright Did 1.11.1 solve the low peer count issues for you? I can try backporting #8541 when we do the next release, but I'm not sure it solves the issue entirely (and there were also other changes made in 1.11 that help with low peer count, #8530).
@andresilva It's much, much better with 1.11.0 (still running 1.11.0 to get long-term metrics). I've been running 1.11.0 for over 8 days now and I have between 22 and 24 peers constantly. It's a big improvement. Our production systems with Parity 1.9.7 are running fine with the

I can't confirm 1.11(.0) losing peers like others reported here recently.
@c0deright I moved our Parity from 1.11 to 1.10.4 and after a day of running it has stabilized at a 70 peer count (min/max set to 256/1024). 1.11 has a memory leak that kills it in a way that it cannot recover from on its own. Hope this helps. Update: dropped to 20 today (23.05).
@andresilva This issue is fixed for me with the 1.11 branch. My test machine on 1.11 constantly has ~25 peers, not decreasing over time. The newly released 1.10.6 with #8541 backported doesn't seem to help much, as the 1.10.6 peer count is only ~6 (with discovery re-enabled, of course; but I just started 1.10.6 ten minutes ago, so the peer count might change in a couple of hours). With 1.11 the low peer count is definitely gone.

Edit: After 2 hours 1.10.6 still had only ~5 peers, so I copied
For me: I do not use the 1.11.x branch because it's very unstable. NOTE: I have Though, I need to monitor over a week at least to get a better perspective.
Cool.
I am running the "stable" branch. Port 30303 is forwarded. The cache is cleared. nodes.json is also cleared. It is mind-boggling to me how this issue has been closed when clearly it's nowhere near resolved, as expressed here.
TildeSlashC0re - Strange, because for me it has been fixed since the v11 unstable package. I don't always get 25 peers, but often 23 or so. (I'm using the ETC chain.)
Tried the classic chain; the problem persists. I have disabled the whisper protocol now and topped out at 18 peers at first, only to drop down to 1 peer about 15 minutes later; overall the connection to peers is highly inconsistent. Here's the config.toml:

```toml
[parity]
mode = "active"
base_path = "/chronos/.parity"
identity = "pyramid"
chain = "foundation"

[network]
min_peers = 50
max_peers = 100

[footprint]
db_compaction = "hdd"
```
Here's an hour's worth of the Parity log running on the foundation chain.
Update: node count over a timespan of two hours.
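On the connectability side, whether other nodes can dial back in also depends on the port and NAT settings in the [network] section, which the config above leaves at defaults. A hedged sketch with placeholder values (the external IP is illustrative only):

```toml
# Sketch only: options that affect whether remote peers can reach this node.
[network]
port = 30303
nat = "extip:203.0.113.10"   # placeholder; use "upnp" or "any" if the external IP is dynamic
discovery = true
min_peers = 50
max_peers = 100
```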
I had the peer count problem for months, so I know how annoying it is. I'm not so good at

```
[Service]
User=cs

[Install]
WantedBy=multi-user.target
```
Well, I'll leave it at that for now, I guess... It is really frustrating to see that even the stable branch can't maintain a persistent peer count over a relatively short time span. Regarding my initial comment in here: it seems to have originated from having the whisper protocol enabled. Update
Got a couple of betas (rc2, I think); one has 0 peers, one has 1 peer :|
Before filing a new issue, please provide the following information.
Been running 1.8.5 with no problems since it was released; the peer count was always at max peers or very close to it. Since installing 1.8.6 (the node has been synced for months and was only stopped for a few seconds), Parity constantly drops below min peers.
I saw the warning "Please note, that the initial database compaction after upgrading might temporarily reduce the node's performance." But does node performance include the number of connected peers?
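For reference, the compaction behaviour mentioned in that upgrade note is influenced by the database settings in the [footprint] section of config.toml; a minimal sketch with illustrative values, not a recommendation:

```toml
# Sketch only: database-related settings from the [footprint] section.
[footprint]
db_compaction = "auto"   # "ssd" or "hdd" to match the drive, "auto" to detect
cache_size = 1024        # total cache size in MB
```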