Peer discovery issue - Error sending status request #8933
Duplicates #7531
I updated Parity to 1.11.4 both locally and on the remote node, but get the same error. Here's the new log
Can you open each node individually and delete the nodes.json from the chains directory?
I can't test it in full because I don't have access to the majority of nodes in our network. I'm also not sure that deleting nodes.json files on old nodes is a good idea, since those might be the only correct ones. I upgraded two nodes to 1.11.4. On one of these two, I completely deleted the Parity data folder (including nodes.json) and resynced it with a fresh enode. It now only has a single peer, and it's not the second node. It'd be good if someone could clarify what this error means.
This error appears here: https://github.com/paritytech/parity/blob/2060ea5de388f7b3b515ba5c2d58425585a8ac1e/ethcore/sync/src/chain/handler.rs#L128-L136. It actually looks like you have connectivity issues, as it means that you'll disconnect from the peer. How many nodes do you have on the network?
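To illustrate what the linked handler does, here is a minimal sketch in Rust. The names (`should_disconnect`) are illustrative, not Parity's actual API; the point is only that a failed status handshake tears the session down, so the remote node never counts toward the peer total.

```rust
use std::io;

// Simplified sketch of the behavior behind the linked handler code:
// if sending the status request fails, the error is logged and the
// session is dropped, so the peer count goes back down.
fn should_disconnect(status_result: io::Result<()>) -> bool {
    match status_result {
        Ok(()) => false,
        Err(e) => {
            // Corresponds to the "Error sending status request" log line.
            eprintln!("Error sending status request: {}", e);
            true
        }
    }
}

fn main() {
    assert!(!should_disconnect(Ok(())));
    let err = io::Error::new(io::ErrorKind::BrokenPipe, "connection reset");
    assert!(should_disconnect(Err(err)));
    println!("ok");
}
```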
In total there are about 60 nodes; we keep a list of 12 of them to use as
It can take a while to connect to all nodes, but the reserved peers should definitely connect quickly. Alternatively, you could try to sync with
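For reference, reserved peers can be pinned via the TOML config file as well as CLI flags. A minimal sketch, assuming the 1.11-era option names; the file path and its contents are placeholders:

```toml
# config.toml — assumed option names from the 1.11-era Parity CLI
[network]
# File with one enode URL per line (placeholder path).
reserved_peers = "/etc/parity/reserved.txt"
# Connect only to the reserved peers while debugging discovery.
reserved_only = true
```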
This is basically what I did: I started with 0 peers and added a single peer as a reserved peer. Links to the logs are above.
Is this still an issue with the recent discovery fixes? @phahulin let me know if this still persists in the recent stable and you need this reopened.
I'm seeing this issue on Parity nodes that have been restored from an S3 backup. The node connects to 1 out of 7 peers in total. These nodes only serve as API nodes for dApps and therefore have no credentials attached to them. We're running a proof-of-authority chain. Procedure:
Parity version:
Stacktrace:
Config
Update 1: It turns out that the network key, which is responsible for generating the enode, was the same as the one from the backed-up node. Once I deleted it and a new enode was generated, the peer count went up to 2. I'm not sure if that's related to this problem. However, it doesn't go beyond 2 and the stacktrace is still being logged.
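This would be consistent with peers being tracked by node ID. A conceptual sketch (not Parity's actual discovery code, and the node IDs are made up): since the node ID is derived from the network key, two restored nodes sharing one backed-up key advertise the same ID, and the second registration simply overwrites the first instead of adding a peer.

```rust
use std::collections::HashMap;

// Conceptual sketch: a peer table keyed by node ID. Duplicate network
// keys produce duplicate node IDs, so the hosts collapse into one entry.
fn register_peer(
    table: &mut HashMap<&'static str, &'static str>,
    node_id: &'static str,
    addr: &'static str,
) {
    table.insert(node_id, addr);
}

fn main() {
    let mut table = HashMap::new();
    // The first two entries simulate two nodes restored from one backup.
    register_peer(&mut table, "a1b2c3", "10.0.0.5:30303");
    register_peer(&mut table, "a1b2c3", "10.0.0.6:30303");
    register_peer(&mut table, "ffee99", "10.0.0.7:30303");
    // Three hosts, but only two distinct peers.
    println!("distinct peers: {}", table.len());
}
```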
Update 2: I stopped Parity and deleted the files below. After that, Parity immediately synced to the reserved peers.
The content of
Closing issue due to its stale state.
We run a PoA-based network, and I noticed that recently new nodes joining the network have a very low peer count. This might be related to our security upgrade from 1.9.2 to 1.10.6.
"Old" nodes (the ones launched some time ago) maintain a high peer count (30-40). However, when a new node joins the network, its peer count stays low (3-5). It appears that some of the old nodes can no longer be synced from, while others still can. I don't see what differentiates them; all old nodes are in sync and look similar.
I did the following test: I started Parity locally on my laptop with the correct genesis file but without any peers:
I waited for the `0/25 peers` message to appear in the logs. Then I added one of the old nodes as a reserved peer (I changed the IP address here for security purposes). There were still `0/25 peers`. Here is the log; I see an error message