This repository has been archived by the owner on Sep 26, 2019. It is now read-only.

NC-1883 - IPv6 peers #281

Merged: 6 commits merged into PegaSysEng:master on Nov 21, 2018

Conversation

@shemnon (Contributor) commented Nov 19, 2018

PR description

Add ipV6 to the datagram socket options; this creates a socket that handles both
IPv4 and IPv6 connections.

Fixed Issue(s)

NC-1883

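As a rough illustration of the dual-stack behavior this change enables (a hedged sketch using plain java.nio rather than the Vert.x DatagramSocketOptions the PR actually touches): opening a datagram channel with the INET6 protocol family yields, on most operating systems, a socket that also accepts IPv4-mapped traffic.

```java
import java.net.InetSocketAddress;
import java.net.StandardProtocolFamily;
import java.nio.channels.DatagramChannel;

public class DualStackSketch {
    public static void main(String[] args) throws Exception {
        // Open an IPv6-family UDP channel. On most platforms IPV6_V6ONLY
        // defaults to off, so the same socket also handles IPv4-mapped
        // peers (::ffff:a.b.c.d) -- the dual-stack behavior this PR is after.
        try (DatagramChannel channel =
                 DatagramChannel.open(StandardProtocolFamily.INET6)) {
            // Bind to the IPv6 any-address on an ephemeral port.
            channel.bind(new InetSocketAddress("::", 0));
            System.out.println("bound=" + (channel.getLocalAddress() != null));
        }
    }
}
```

Note this only demonstrates the socket family; the actual change goes through Vert.x's datagram socket options.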
@ajsutton (Contributor) commented:
For the record, I've run a test with three Pantheon nodes: one as the boot node with --p2p-listen=127.0.0.1:30303 and the other two with --p2p-listen=[::1]:30304 (or 30305 for the second one). So that's an IPv4 boot node with two IPv6 peers, and they all connect up successfully without this change.

I was not able to get an IPv6 boot node to work with or without this change:

2018-11-20 15:24:26.177+10:00 | vert.x-eventloop-thread-1 | WARN  | PeerDiscoveryAgent | Sending to peer DiscoveryPeer{status=bonding, endPoint=Endpoint{host='::1', udpPort=30305, getTcpPort=30305}, firstDiscovered=1542691464109, lastContacted=0, lastSeen=0} failed, packet: 0x7363306fc18eebd735ae255f5032bb560d1874bc961fd336c775b7d875d1358b3cfd20e614a049e3539c5a0124160edbd17a3548269262d2a96aa5f1ef010b51232978586bc5bc3c911683a08b81712b2ce2629dfca0f85db27ae4fee76230430101f83804d79000000000000000000000000000000001827660827660d790000000000000000000000000000000018276618276618601672f94960d
java.net.SocketException: Invalid argument
	at sun.nio.ch.DatagramChannelImpl.send0(Native Method) ~[?:1.8.0_172]
	at sun.nio.ch.DatagramChannelImpl.sendFromNativeBuffer(DatagramChannelImpl.java:521) ~[?:1.8.0_172]
	at sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:483) ~[?:1.8.0_172]
	at sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:462) ~[?:1.8.0_172]
	at io.netty.channel.socket.nio.NioDatagramChannel.doWriteMessage(NioDatagramChannel.java:291) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.nio.AbstractNioMessageChannel.doWrite(AbstractNioMessageChannel.java:142) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:875) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:362) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:842) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1321) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1041) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300) ~[netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.vertx.core.datagram.impl.DatagramSocketImpl.doSend(DatagramSocketImpl.java:258) ~[vertx-core-3.5.0.jar:?]
	at io.vertx.core.datagram.impl.DatagramSocketImpl.lambda$send$2(DatagramSocketImpl.java:242) ~[vertx-core-3.5.0.jar:?]
	at io.vertx.core.impl.AddressResolver.lambda$null$0(AddressResolver.java:89) ~[vertx-core-3.5.0.jar:?]
	at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:344) ~[vertx-core-3.5.0.jar:?]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-common-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) [netty-transport-4.1.15.Final.jar:4.1.15.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.15.Final.jar:4.1.15.Final]
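The SocketException above is consistent with sending to an IPv6 destination through a socket opened in IPv4-only mode. A minimal sketch reproducing that kind of family mismatch (plain java.nio, not the actual Pantheon/Vert.x code path):

```java
import java.net.InetSocketAddress;
import java.net.StandardProtocolFamily;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class FamilyMismatch {
    public static void main(String[] args) throws Exception {
        // An IPv4-only channel cannot send to an IPv6 destination such as
        // [::1]. Depending on JDK version this surfaces either as
        // SocketException ("Invalid argument") from the native send, or as
        // UnsupportedAddressTypeException from the channel's address check.
        try (DatagramChannel channel =
                 DatagramChannel.open(StandardProtocolFamily.INET)) {
            channel.bind(new InetSocketAddress("127.0.0.1", 0));
            channel.send(ByteBuffer.wrap(new byte[] {0x01}),
                         new InetSocketAddress("::1", 30305));
            System.out.println("send succeeded (unexpected)");
        } catch (Exception e) {
            System.out.println("send failed: " + e.getClass().getSimpleName());
        }
    }
}
```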

Set the IPv6 state based on the presence or absence of an IPv6 address.
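The commit above presumably keys the socket's IPv6 flag off the configured bind address. A hedged sketch of that kind of detection (the helper name is illustrative; the real change sets the flag on Vert.x's datagram socket options):

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class Ipv6Detection {
    // Illustrative helper: enable IPv6 on the discovery socket only when
    // the configured bind host is itself an IPv6 address. Literal
    // addresses are parsed directly, so no DNS lookup occurs here.
    static boolean useIpv6(String bindHost) throws Exception {
        return InetAddress.getByName(bindHost) instanceof Inet6Address;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(useIpv6("127.0.0.1")); // false
        System.out.println(useIpv6("::1"));       // true
    }
}
```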
@shemnon (Contributor, Author) commented Nov 20, 2018

This unbreaks the tests. However, I am strongly considering a CLI flag (either --ipv6, --noipv6, or --forceipv6) that would either enable the detection (--ipv6), force IPv4-only mode (--noipv6), or force dual IPv4/IPv6 mode (--forceipv6).

The reason is that with the flag turned on, we report in the CLI that we bind to 0:0:0:0:0:0:0:0: instead of 0.0.0.0:. I'm not sure whether that is a big issue. We could change the logging to report *: when binding to the any address while we are at it.
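For reference, the two "any address" renderings mentioned above come straight from Java's address formatting: Inet6Address.getHostAddress() prints the full, uncompressed form.

```java
import java.net.InetAddress;

public class AnyAddressFormat {
    public static void main(String[] args) throws Exception {
        // The IPv4 wildcard renders as a dotted quad...
        System.out.println(InetAddress.getByName("0.0.0.0").getHostAddress());
        // ...while the IPv6 wildcard renders uncompressed, which is the
        // form that ends up in the bind log message.
        System.out.println(InetAddress.getByName("::").getHostAddress());
    }
}
```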

@shemnon (Contributor, Author) commented Nov 20, 2018

The NC-1921 fix alone won't get you IPv6 peers, and the exception you saw is what happens. This patch allows the peers to bond now: https://gist.github.com/shemnon/9fee88df0e492e0ab28d83a2bd9b8df5

@ajsutton (Contributor) commented:

I'm saying I have working IPv6 peers with no changes from current master, as long as the bootnode is IPv4.

@ajsutton (Contributor) left a review comment:

LGTM. Using an IPv6 boot node now works.

@shemnon shemnon merged commit 2f2050b into PegaSysEng:master Nov 21, 2018
@shemnon shemnon deleted the NC-1883 branch January 25, 2019 21:47