Running inside docker, server not sending signal with external IP #53

Closed
amunhoz opened this issue Aug 22, 2021 · 8 comments

@amunhoz

amunhoz commented Aug 22, 2021

I'm running node-datachannel on a server as an alternative channel to WebSockets for streaming data.
But I'm having problems establishing a connection. When I checked, the server side (node-datachannel) was not sending its external IP in the signal. It looks like node-datachannel is only sending local IPs, so my system times out after 8 s of trying.
Any ideas? Thanks in advance.

Server side:
this.conns[id].peer = new nodeDataChannel.PeerConnection("randomid" + "_server", {
    portRangeBegin: 4000,
    portRangeEnd: 4100,
    iceServers: [
        'stun:stun.l.google.com:19302',
        'stun:stun4.l.google.com:19302',
        'stun:global.stun.twilio.com:3478?transport=udp'
    ]
});

Signals

Signal sent by the browser
{
    "type": "offer",
    "sdp": "v=0\r\no=- 2716297649924799511 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0\r\na=extmap-allow-mixed\r\na=msid-semantic: WMS\r\nm=application 9 UDP/DTLS/SCTP webrtc-datachannel\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:Rb6X\r\na=ice-pwd:oSHan4OByCxC4bz2FLAPWIkB\r\na=ice-options:trickle\r\na=fingerprint:sha-256 0C:62:2A:3D:19:28:FB:8D:74:AC:9A:F0:E6:A9:9B:CD:B6:9C:5B:D9:3B:A3:B3:88:F8:0B:E2:F6:1C:E6:D1:38\r\na=setup:actpass\r\na=mid:0\r\na=sctp-port:5000\r\na=max-message-size:262144\r\n"
}

{
    "candidate": {
        "candidate": "candidate:767800524 1 udp 2113937151 9f1fc3c2-bc08-459e-97f7-1785dd139022.local 60637 typ host generation 0 ufrag Rb6X network-cost 999",
        "sdpMLineIndex": 0,
        "sdpMid": "0"
    }
}
{
    "candidate": {
        "candidate": "candidate:842163049 1 udp 1677729535 <<<browser ip removed>>> 55665 typ srflx raddr 0.0.0.0 rport 0 generation 0 ufrag Rb6X network-cost 999",
        "sdpMLineIndex": 0,
        "sdpMid": "0"
    }
}

Signal sent by the server (node-datachannel)
{
    "signal": {
        "sdp": "v=0\r\no=rtc 2661493741 0 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0\r\na=msid-semantic:WMS *\r\na=setup:active\r\na=ice-ufrag:vRVH\r\na=ice-pwd:A2Lq827UD0rprQ744fNWTV\r\na=ice-options:trickle\r\na=fingerprint:sha-256 7E:C1:60:AC:02:02:A8:B3:FC:B2:E8:42:D7:C4:26:EA:57:C7:A0:4C:D1:39:45:28:B7:14:3B:68:BA:F1:E3:D3\r\nm=application 9 UDP/DTLS/SCTP webrtc-datachannel\r\nc=IN IP4 0.0.0.0\r\na=bundle-only\r\na=mid:0\r\na=sendrecv\r\na=ice-options:trickle\r\na=sctp-port:5000\r\na=max-message-size:262144\r\n",
        "type": "answer"
    }
}

{
    "signal": {
        "candidate": {
            "candidate": "a=candidate:1 1 UDP 2122317823 172.17.0.2 9098 typ host",  //docker internal ip
            "sdpMid": "0"
        }
    }
}

Note: the UDP ports are mapped into the container.
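
A mapping covering the configured range might look like this (a sketch, not the exact command used; the image name is a placeholder):

docker run -p 4000-4100:4000-4100/udp <server-image>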

@amunhoz
Author

amunhoz commented Aug 22, 2021

I saw this new option in libdatachannel, "iceTransportPolicy". Does node-datachannel support it?

@murat-dogan
Owner

“iceTransportPolicy” is not related to your issue.
It seems to me there is a permission problem.
Could you please try to run your app as root?

@amunhoz
Author

amunhoz commented Aug 22, 2021

It has root permissions inside Docker only (no network admin permissions on the VM). Is that the problem?

@amunhoz
Author

amunhoz commented Aug 22, 2021

With network root permissions, it would get the current external IP from the server's interfaces. But isn't the ICE (STUN) server supposed to provide an external candidate?

@murat-dogan
Owner

The problem is with Docker.
I guess using host network mode could solve it.
Could you please try it?
https://docs.docker.com/network/host/
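
For example (a minimal sketch; the image name is a placeholder), host network mode would be enabled like this:

docker run --network host <your-node-app-image>

With --network host the container shares the host's network stack, so the UDP ports do not need to be published separately.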

@paullouisageneau
Contributor

paullouisageneau commented Aug 22, 2021

But I'm having problems establishing a connection. When I checked, the server side (node-datachannel) was not sending its external IP in the signal. It looks like node-datachannel is only sending local IPs, so my system times out after 8 s of trying.

The symptom indicates the STUN server could not be reached.

Note: the UDP ports are mapped into the container.

If you mapped the UDP ports manually to the container, you shouldn't try to rely on STUN. The two approaches are:

  • Set a STUN server, do not redirect the ports, and let STUN hole-punch the NAT and gather the external address.
  • Don't set a STUN server, redirect the ports manually, and replace the host with the external IP address in each outgoing candidate, with something like candidate.split(' ').map((f, i) => i === 4 ? "A.B.C.D" : f).join(' ') (libdatachannel has an API for this, but it isn't exposed by the wrapper); see the sketch below.
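
For the second approach, a minimal sketch of the candidate rewriting in the signaling handler could look like this (EXTERNAL_IP and sendSignal are placeholders for your own setup; it assumes node-datachannel's onLocalCandidate callback delivers the raw candidate string and the mid):

// Sketch: no STUN server, ports mapped manually, host address rewritten before signaling.
const EXTERNAL_IP = "A.B.C.D"; // the server's public address (placeholder)

peer.onLocalCandidate((candidate, mid) => {
    // Field 4 of an ICE candidate line is the connection address,
    // e.g. "a=candidate:1 1 UDP 2122317823 172.17.0.2 9098 typ host".
    const rewritten = candidate
        .split(' ')
        .map((field, i) => (i === 4 ? EXTERNAL_IP : field))
        .join(' ');
    sendSignal({ candidate: { candidate: rewritten, sdpMid: mid } }); // placeholder signaling function
});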

Server side:
this.conns[id].peer = new nodeDataChannel.PeerConnection("randomid" + "_server", { portRangeBegin: 4000, portRangeEnd: 4100, iceServers: ['stun:stun.l.google.com:19302', 'stun:stun4.l.google.com:19302', 'stun:global.stun.twilio.com:3478?transport=udp'] });

Just a note: you shouldn't use multiple STUN servers. Browsers will try to query all of them, slowing down the connection process, whereas libdatachannel will just pick one. Also, the transport parameter is meant for TURN servers; for STUN (which is over UDP by design) it doesn't make sense and will be ignored.
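
For instance, a trimmed version of the constructor call from the issue, keeping a single STUN server (a sketch based on the snippet above; the id key and port range are unchanged):

// One STUN server is enough; "?transport=udp" only applies to TURN and is ignored for STUN.
this.conns[id].peer = new nodeDataChannel.PeerConnection("randomid" + "_server", {
    portRangeBegin: 4000,
    portRangeEnd: 4100,
    iceServers: ['stun:stun.l.google.com:19302']
});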

@amunhoz
Author

amunhoz commented Aug 22, 2021

I think the IP replacement or the Docker host-network option can fix this for now.
But it looks like this problem occurs with node-webrtc too.
And there is no problem reaching the STUN server at all; the ports used to negotiate are within the mapped range...
I believe this is not something specific to this project.
Thanks for the help!

@amunhoz amunhoz closed this as completed Aug 22, 2021
@paullouisageneau
Contributor

Indeed, I'm not sure how Docker maps the ports, but the problem might actually be linked to the fact that the STUN server is queried from within the mapped range. Since the outgoing datagrams do not match a previously mapped incoming request, they might be dropped.
