This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

EOS multi-host multi-node environment --- (net_plugin.cpp:2546 handle_message) && (net_plugin.cpp:2133 operator()) && (net_plugin.cpp:1010 operator() Error sending) problem #5325

Closed
amActiveHello opened this issue Aug 20, 2018 · 3 comments

amActiveHello commented Aug 20, 2018

1. When I run nodes on 4 hosts, with eos1 as the producer and the other 3 as receivers:
eos1: 172.20.22.137 ll-B85M-DS3H-A
eos2: 172.20.22.128 u-Inspiron-3847
eos3: 172.20.22.153 xue-To-be-filled-by-O-E-M
eos4: 172.20.22.162 uu-Lenovo
2. eos1's nodeos config:

bnet-endpoint = 172.20.22.137:4321    

# for communication with cleos
http-server-address = 172.20.22.137:9800  
  
# for syncing blocks
p2p-listen-endpoint = 172.20.22.137:9900 
p2p-peer-address = 172.20.22.128:9900
p2p-peer-address = 172.20.22.162:9900
p2p-peer-address = 172.20.22.153:9900

agent-name = "EOS Test Agent"
enable-stale-production = true

producer-name = eosio

# producer key, generated with "cleos create key"
private-key =["EOS8Znrtgwt8TfpmbVpTKvA2oB8Nqey625CLN8bCN3TEbgx86Dsvr", "5K463ynhZoCDDa4RDcr63cUwWLTnKqmdcoTKTHBjqoKfv4u5V7p"]

unlock-timeout = 90000

# load plugins
plugin = eosio::chain_api_plugin
plugin = eosio::history_api_plugin
plugin = eosio::chain_plugin
plugin = eosio::history_plugin
plugin = eosio::net_plugin
plugin = eosio::net_api_plugin 

3. eos2's nodeos config:

bnet-endpoint = 172.20.22.128:4321    

# for communication with cleos
http-server-address = 172.20.22.128:9800  
  
# for syncing blocks
p2p-listen-endpoint = 172.20.22.128:9900 
p2p-peer-address = 172.20.22.137:9900
p2p-peer-address = 172.20.22.162:9900
p2p-peer-address = 172.20.22.153:9900

agent-name = "EOS eosio2 Agent"
enable-stale-production = true

producer-name = eosio2

# producer key, generated with "cleos create key"
private-key =["EOS8Znrtgwt8TfpmbVpTKvA2oB8Nqey625CLN8bCN3TEbgx86Dsvr", "5K463ynhZoCDDa4RDcr63cUwWLTnKqmdcoTKTHBjqoKfv4u5V7p"]

unlock-timeout = 90000

# load plugins
plugin = eosio::chain_api_plugin
plugin = eosio::history_api_plugin
plugin = eosio::chain_plugin
plugin = eosio::history_plugin
plugin = eosio::net_plugin
plugin = eosio::net_api_plugin 

The eos3 and eos4 nodeos configs are the same as eos2's.
4. When I start eos1's nodeos as the producer and eos2-eos4 as receivers:
eos1 nodeos output:

2018-08-20T02:29:18.338 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from xue-To-be-filled-by-O-E-M:9900 - 5b16a7f: Connection reset by peer
2018-08-20T02:36:02.685 thread-0 net_plugin.cpp:741 connection ] accepted network connection
2018-08-20T02:36:02.703 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from xue-To-be-filled-by-O-E-M:9900 - 5b16

eos2 nodeos output:

2018-08-20T02:25:54.810 thread-0 net_plugin.cpp:1010 operator() ] Error sending to peer xue-To-be-filled-by-O-E-M:9900 - 5b16a7f: Connection reset by peer
2018-08-20T02:25:54.810 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from connecting client: Operation canceled
eos3 nodeos output:
2018-08-20T02:26:18.685 thread-0 net_plugin.cpp:2546 handle_message ] block_validate_exception accept block #3136 syncing from ll-B85M-DS3H-A:9900 - 0ceff0b
2018-08-20T02:26:18.685 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from connecting client: Bad file descriptor
2018-08-20T02:40:11.292 thread-0 net_plugin.cpp:2546 handle_message ] block_validate_exception accept block #38 syncing from 172.20.22.162:9900 - 4b44c0e
2018-08-20T02:40:11.292 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from 172.20.22.162:9900: Bad file descriptor
2018-08-20T02:40:11.293 thread-0 net_plugin.cpp:2546 handle_message ] block_validate_exception accept block #38 syncing from ll-B85M-DS3H-A:9900 - 4a9e7ef
2018-08-20T02:40:11.293 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from 172.20.22.137:9900: Bad file descriptor
2018-08-20T02:40:11.295 thread-0 net_plugin.cpp:2546 handle_message ] block_validate_exception accept block #38 syncing from 172.20.22.128:9900 - 1ba4ea6
2018-08-20T02:40:11.295 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from 172.20.22.128:9900: Bad file descriptor

eos4 nodeos output:

2018-08-21T02:26:44.047 thread-0 net_plugin.cpp:1914 connect ] host: 172.20.22.153 port: 9900
2018-08-21T02:26:44.059 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from xue-To-be-filled-by-O-E-M:9900 - 5b16a7f: Connection reset by peer
2018-08-21T02:27:07.607 thread-0 net_plugin.cpp:741 connection ] accepted network connection
2018-08-21T02:27:07.624 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from xue-To-be-filled-by-O-E-M:9900 - 5b16a7f: Connection reset by peer
2018-08-21T02:27:14.048 thread-0 net_plugin.cpp:1914 connect ] host: 172.20.22.153 port: 9900
2018-08-21T02:27:14.060 thread-0 net_plugin.cpp:2133 operator() ] Error reading message from xue-To-be-filled-by-O-E-M:9900 - 5b16a7f: Connection reset by peer
5. Pinging each of the other 3 hosts from any one host succeeds:
eos1:

sec@ll-B85M-DS3H-A:~/rgh/eos_multi$ ping 172.20.22.128
PING 172.20.22.128 (172.20.22.128) 56(84) bytes of data.
64 bytes from 172.20.22.128: icmp_seq=1 ttl=64 time=0.280 ms
64 bytes from 172.20.22.128: icmp_seq=2 ttl=64 time=0.277 ms
^C
--- 172.20.22.128 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.277/0.278/0.280/0.016 ms
sec@ll-B85M-DS3H-A:~/rgh/eos_multi$ ping 172.20.22.153
PING 172.20.22.153 (172.20.22.153) 56(84) bytes of data.
64 bytes from 172.20.22.153: icmp_seq=1 ttl=64 time=0.278 ms
64 bytes from 172.20.22.153: icmp_seq=2 ttl=64 time=0.266 ms
^C
--- 172.20.22.153 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.266/0.272/0.278/0.006 ms
sec@ll-B85M-DS3H-A:~/rgh/eos_multi$ ping 172.20.22.162
PING 172.20.22.162 (172.20.22.162) 56(84) bytes of data.
64 bytes from 172.20.22.162: icmp_seq=1 ttl=64 time=0.268 ms
64 bytes from 172.20.22.162: icmp_seq=2 ttl=64 time=0.253 ms
^C
--- 172.20.22.162 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.253/0.260/0.268/0.017 ms

eos2:

u@u-Inspiron-3847:~/eos2_multi$ ping 172.20.22.137
PING 172.20.22.137 (172.20.22.137) 56(84) bytes of data.
64 bytes from 172.20.22.137: icmp_seq=1 ttl=64 time=0.178 ms
64 bytes from 172.20.22.137: icmp_seq=2 ttl=64 time=0.222 ms
^C
--- 172.20.22.137 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.178/0.200/0.222/0.022 ms
u@u-Inspiron-3847:~/eos2_multi$ ping 172.20.22.153
PING 172.20.22.153 (172.20.22.153) 56(84) bytes of data.
64 bytes from 172.20.22.153: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 172.20.22.153: icmp_seq=2 ttl=64 time=0.291 ms
^C
--- 172.20.22.153 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.289/0.290/0.291/0.001 ms
u@u-Inspiron-3847:~/eos2_multi$ ping 172.20.22.162
PING 172.20.22.162 (172.20.22.162) 56(84) bytes of data.
64 bytes from 172.20.22.162: icmp_seq=1 ttl=64 time=0.294 ms
64 bytes from 172.20.22.162: icmp_seq=2 ttl=64 time=0.305 ms
^C
--- 172.20.22.162 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.294/0.299/0.305/0.018 ms

eos3:

xue@xue-To-be-filled-by-O-E-M:~/eos3_multi$ ping 172.20.22.137
PING 172.20.22.137 (172.20.22.137) 56(84) bytes of data.
64 bytes from 172.20.22.137: icmp_seq=1 ttl=64 time=0.251 ms
64 bytes from 172.20.22.137: icmp_seq=2 ttl=64 time=0.245 ms
^C
--- 172.20.22.137 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.245/0.248/0.251/0.003 ms
xue@xue-To-be-filled-by-O-E-M:~/eos3_multi$ ping 172.20.22.128
PING 172.20.22.128 (172.20.22.128) 56(84) bytes of data.
64 bytes from 172.20.22.128: icmp_seq=1 ttl=64 time=0.267 ms
64 bytes from 172.20.22.128: icmp_seq=2 ttl=64 time=0.298 ms
^C
--- 172.20.22.128 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.267/0.282/0.298/0.022 ms
xue@xue-To-be-filled-by-O-E-M:~/eos3_multi$ ping 172.20.22.162
PING 172.20.22.162 (172.20.22.162) 56(84) bytes of data.
64 bytes from 172.20.22.162: icmp_seq=1 ttl=64 time=0.182 ms
64 bytes from 172.20.22.162: icmp_seq=2 ttl=64 time=0.202 ms
^C
--- 172.20.22.162 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.182/0.192/0.202/0.010 ms

eos4:

uu@uu-Lenovo:~/eos4_multi$ ping 172.20.22.137
PING 172.20.22.137 (172.20.22.137) 56(84) bytes of data.
64 bytes from 172.20.22.137: icmp_seq=1 ttl=64 time=0.241 ms
64 bytes from 172.20.22.137: icmp_seq=2 ttl=64 time=0.250 ms
^C
--- 172.20.22.137 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.241/0.245/0.250/0.016 ms
uu@uu-Lenovo:~/eos4_multi$ ping 172.20.22.128
PING 172.20.22.128 (172.20.22.128) 56(84) bytes of data.
64 bytes from 172.20.22.128: icmp_seq=1 ttl=64 time=0.258 ms
64 bytes from 172.20.22.128: icmp_seq=2 ttl=64 time=0.277 ms
^C
--- 172.20.22.128 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.258/0.267/0.277/0.018 ms
uu@uu-Lenovo:~/eos4_multi$ ping 172.20.22.153
PING 172.20.22.153 (172.20.22.153) 56(84) bytes of data.
64 bytes from 172.20.22.153: icmp_seq=1 ttl=64 time=0.193 ms
64 bytes from 172.20.22.153: icmp_seq=2 ttl=64 time=0.201 ms
^C
--- 172.20.22.153 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.193/0.197/0.201/0.004 ms
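Note that ping only proves ICMP reachability; the errors above occur on the TCP p2p port (9900). A minimal sketch of a TCP-level reachability check (the hosts and port come from the configs above; the helper name is my own):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        # create_connection performs a full TCP handshake, unlike ICMP ping.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the peer addresses from the configs above:
# for host in ("172.20.22.137", "172.20.22.128", "172.20.22.153", "172.20.22.162"):
#     print(host, tcp_port_open(host, 9900))
```

A host can answer ping while the p2p port is closed or firewalled, so this distinguishes a network problem from a nodeos-level one.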
@amActiveHello amActiveHello changed the title EOS multi-host multi-node environment---(net_plugin.cpp:2546 handle_message) && (net_plugin.cpp:2133 operator()) problem EOS multi-host multi-node environment---(net_plugin.cpp:2546 handle_message) && (net_plugin.cpp:2133 operator()) && (net_plugin.cpp:1010 operator() Error sending)problem Aug 20, 2018
taokayan (Contributor) commented:

You can't have enable-stale-production = true on the receiver nodes. In your setup, every node is producing independently, which is why they do not accept each other's blocks.
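Concretely, on that advice only the producing node (eos1) keeps its producer settings; a receiver's config would look roughly like this (a sketch based on the eos2 config above, not a verified working file):

```ini
# receiver node (eos2/eos3/eos4): do not produce blocks
enable-stale-production = false
# remove or comment out the producer settings on receivers:
# producer-name = eosio2
# private-key = [...]

# networking stays as before
p2p-listen-endpoint = 172.20.22.128:9900
p2p-peer-address = 172.20.22.137:9900
```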

amActiveHello (Author) commented:

I have set the other nodes to enable-stale-production = false, but the same error is still reported.

Gshelterm commented:
I got the same error. Have you solved the problem?
