Connections going via FDP don't appear in conntrack #1577

Closed
tomwilkie opened this issue Oct 22, 2015 · 10 comments

@tomwilkie
Contributor

tomwilkie commented Oct 22, 2015

Which kinda breaks Scope.

Repro:

docker run -d --name nginx nginx
docker run -d --name client alpine /bin/sh -c "while true; do \
    wget http://nginx.weave.local:80/ -O - >/dev/null || true; \
    sleep 1; \
done"
sudo conntrack -E -p tcp

Pre-1.2 (or with WEAVE_NO_FASTDP=true) you see conntrack events for the connections coming and going.
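
For comparison, a minimal sketch of that check, assuming the standard weave script is on the PATH (the exact stop/launch sequence may differ depending on how weave was started):

weave stop                           # stop the router
WEAVE_NO_FASTDP=true weave launch    # relaunch with fast datapath disabled
sudo conntrack -E -p tcp             # the wget connections now produce events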

@tomwilkie
Contributor Author

This may be relevant https://lwn.net/Articles/633401/

@dpw
Contributor

dpw commented Oct 23, 2015

> This may be relevant https://lwn.net/Articles/633401/

Yes, seems so. Though there is the usual issue that it takes a while for kernel changes to propagate widely enough that we can rely on them.

@rade
Member

rade commented Oct 29, 2015

Is there anything we can do here, besides waiting for the kernel changes to propagate, that doesn't require Herculean effort?

@rade
Member

rade commented Oct 30, 2015

@dpw says we may be able to work around this by introducing an intermediary bridge.
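
Purely as an illustration of the idea (not the eventual implementation, and with hypothetical interface names): an ordinary Linux bridge would sit between the container veths and the ODP datapath, joined by a veth pair, the idea being that bridged packets then traverse the host's netfilter hooks, where conntrack can see them. Roughly:

# Sketch only; the real router would do this over netlink, and "datapath",
# "weave", "vethwe-bridge" and "vethwe-datapath" are placeholder names.
ip link add name weave type bridge
ip link add vethwe-bridge type veth peer name vethwe-datapath
ip link set vethwe-bridge master weave          # bridge side of the veth pair
ip link set weave up
ip link set vethwe-bridge up
ip link set vethwe-datapath up
ovs-dpctl add-if datapath vethwe-datapath       # datapath side (needs openvswitch tools)
# Container veths attach to the "weave" bridge as before; with
# net.bridge.bridge-nf-call-iptables=1 (commonly set on Docker hosts) the
# bridged traffic passes through iptables and hence conntrack.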

@bboreham
Contributor

Does it break the same way when using Docker's new network, which is also using VXLAN?

@dpw
Contributor

dpw commented Oct 31, 2015

> Does it break the same way when using Docker's new network, which is also using VXLAN?

I wouldn't expect so. It's not due to VXLAN.

@tomwilkie changed the title from "Connection going via FDP don't appear in conntrack" to "Connections going via FDP don't appear in conntrack" on Oct 31, 2015
@rade added this to the 1.3.0 milestone on Nov 4, 2015
@awh self-assigned this on Nov 13, 2015
@awh
Contributor

awh commented Nov 13, 2015

I have completed a simple performance test comparing the existing fast datapath with fast datapath plus an intermediary bridge, e.g.:

  • veth --> weave <-- network --> weave <-- veth
  • veth --> docker0 <-- veth --> weave <-- network --> weave <-- veth --> docker0 <-- veth

Tested on an Intel i7-5820K @ 3.3 GHz using two Ubuntu 15.04 VirtualBox VMs as Docker hosts. docker0 was connected to weave by a veth on each host, and then a container was started on each, taking care to ensure that the eth0 docker interfaces were allocated different addresses in the same subnet. That way, it was possible to run iperf -s in a container on one host (binding to both the weave and Docker interfaces) and then run iperf -c in a container on the other, targeting either the weave or Docker IP of the remote container.
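
A sketch of the corresponding iperf invocations (container names are left out; the target address is whichever interface of the remote container is being exercised, as in the results below):

# On host A, inside the server container:
iperf -s
# On host B, inside the client container:
iperf -c 10.40.0.0      # target the remote container's weave address
iperf -c 172.17.0.3     # target its Docker/intermediary-bridge address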

The performance results on the bare hosts:

$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 42088
[  4]  0.0-10.0 sec  1.45 GBytes  1.24 Gbits/sec
[  5] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 42095
[  5]  0.0-10.0 sec  1.57 GBytes  1.35 Gbits/sec
[  4] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 42096
[  4]  0.0-10.0 sec  1.42 GBytes  1.21 Gbits/sec

Performance results via weave:

[  4] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 33922
[  4]  0.0-10.0 sec  1.05 GBytes   902 Mbits/sec
[  5] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 33923
[  5]  0.0-10.0 sec  1.10 GBytes   946 Mbits/sec
[  4] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 33924
[  4]  0.0-10.0 sec  1.20 GBytes  1.02 Gbits/sec

Performance with an intermediary bridge:

[  5] local 172.17.0.3 port 5001 connected with 172.17.0.2 port 33960
[  5]  0.0-10.0 sec   958 MBytes   803 Mbits/sec
[  4] local 172.17.0.3 port 5001 connected with 172.17.0.2 port 33961
[  4]  0.0-10.0 sec  1022 MBytes   855 Mbits/sec
[  5] local 172.17.0.3 port 5001 connected with 172.17.0.2 port 33962
[  5]  0.0-10.0 sec   972 MBytes   813 Mbits/sec

Based on these measurements, the intermediary bridge restores conntrack functionality at the cost of a ~15% performance reduction.

@rade
Member

rade commented Nov 13, 2015

The use of the docker0 bridge may well skew the results here. It has a bunch of iptables rules hanging off it.
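
For instance, a quick way to see the rules a stock Docker install hangs off docker0 traffic (illustrative only, not part of the benchmark):

sudo iptables -S | grep -i docker          # filter-table rules in the DOCKER chains
sudo iptables -t nat -S | grep -i docker   # NAT rules (masquerading, port mapping)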

@awh
Contributor

awh commented Nov 13, 2015

Good point. I will re-run the tests once we've implemented our own intermediary bridge.

@awh
Contributor

awh commented Nov 24, 2015

> I will re-run the tests once we've implemented our own intermediary bridge.

Test of #1712 using the same setup as before. Bare host performance:

[  4] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 32786
[  4]  0.0-10.0 sec  1.47 GBytes  1.27 Gbits/sec
[  5] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 32787
[  5]  0.0-10.0 sec  1.41 GBytes  1.21 Gbits/sec
[  4] local 192.168.48.12 port 5001 connected with 192.168.48.11 port 32788
[  4]  0.0-10.0 sec  1.37 GBytes  1.18 Gbits/sec

Performance with WEAVE_NO_BRIDGED_FASTDP=1 weave launch:

[  5] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59091
[  5]  0.0-10.0 sec  1.25 GBytes  1.07 Gbits/sec
[  6] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59092
[  6]  0.0-10.0 sec  1.15 GBytes   982 Mbits/sec
[  5] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59093
[  5]  0.0-10.0 sec  1.15 GBytes   987 Mbits/sec

Performance with dedicated intermediary bridge + fastdp:

[  5] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59075
[  5]  0.0-10.0 sec  1.05 GBytes   904 Mbits/sec
[  6] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59076
[  6]  0.0-10.1 sec  1.21 GBytes  1.03 Gbits/sec
[  5] local 10.40.0.0 port 5001 connected with 10.32.0.1 port 59077
[  5]  0.0-10.0 sec   958 MBytes   802 Mbits/sec

With a dedicated intermediary bridge lacking iptables rules, the performance difference is ~10%.
