multicast in an overlay network does not work across swarm nodes #740

Closed
dweomer opened this issue Nov 6, 2015 · 3 comments
dweomer commented Nov 6, 2015

I've provisioned brand new Trusty VMs in our network and, otherwise following the overlay networking guide, used the docker-machine generic driver to install docker 1.9.0 and set up swarm.
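
For reference, the provisioning looked roughly like this (a from-memory sketch rather than a verbatim transcript; the generic driver's SSH options are omitted, and the consul container runs on infra-07-fe, which matches the cluster store shown in the docker info output further down):

# key/value store for the overlay driver, on infra-07-fe (172.16.7.254)
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# swarm master
docker-machine create -d generic --generic-ip-address 172.16.7.253 \
    --swarm --swarm-master --swarm-discovery consul://172.16.7.254:8500 \
    --engine-opt cluster-store=consul://172.16.7.254:8500 \
    --engine-opt cluster-advertise=eth0:2376 \
    infra-07-fd

# swarm agents: same flags minus --swarm-master, repeated for fa, fb and fc
docker-machine create -d generic --generic-ip-address 172.16.7.250 \
    --swarm --swarm-discovery consul://172.16.7.254:8500 \
    --engine-opt cluster-store=consul://172.16.7.254:8500 \
    --engine-opt cluster-advertise=eth0:2376 \
    infra-07-fa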

I have verified that containers can see each other by pinging one from inside the other via docker exec. I am using elasticsearch:1.7 for the test because it clusters via multicast within seconds of startup, which is easy to spot in the logs.
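
Concretely, the test looks something like this (the network and container names are placeholders; the constraint environment variables just pin the containers to different swarm nodes):

# overlay network shared by all nodes, created once against the swarm master
docker network create -d overlay es-net

# one elasticsearch container per node
docker run -d --name es-a --net es-net -e constraint:node==infra-07-fa elasticsearch:1.7
docker run -d --name es-b --net es-net -e constraint:node==infra-07-fb elasticsearch:1.7

# unicast connectivity across nodes is fine (a throwaway busybox supplies ping)
docker run --rm --net es-net busybox ping -c 3 es-a

# the multicast-based cluster join only shows up when both containers land on the same node
docker logs es-b | grep -i 'new_master\|detected_master'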

What I am seeing is that the elasticsearch containers will only cluster with each other if they are co-located on the same swarm node. If I am reading #552 correctly, this is expected behavior, no? If I am misreading #552 and multicast should be working across swarm nodes, what am I missing?

devops@infra-07-fe:~$ uname -a
Linux infra-07-fe 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

devops@infra-07-fe:~$ docker-machine ls
NAME          ACTIVE   DRIVER    STATE     URL                       SWARM
infra-07-fa   -        generic   Running   tcp://172.16.7.250:2376   infra-07-fd
infra-07-fb   -        generic   Running   tcp://172.16.7.251:2376   infra-07-fd
infra-07-fc   -        generic   Running   tcp://172.16.7.252:2376   infra-07-fd
infra-07-fd   *        generic   Running   tcp://172.16.7.253:2376   infra-07-fd (master)
infra-07-fe   -        generic   Running   tcp://172.16.7.254:2376   

devops@infra-07-fe:~$ docker info
Containers: 9
Images: 10
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 4
 infra-07-fa: 172.16.7.250:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 16.46 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=generic, storagedriver=aufs
 infra-07-fb: 172.16.7.251:2376
  └ Containers: 3
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 16.46 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=generic, storagedriver=aufs
 infra-07-fc: 172.16.7.252:2376
  └ Containers: 3
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 16.46 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=generic, storagedriver=aufs
 infra-07-fd: 172.16.7.253:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 16.46 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=generic, storagedriver=aufs
CPUs: 16
Total Memory: 65.83 GiB
Name: 54a9edce6870

devops@infra-07-fe:~$ docker-machine ssh infra-07-fd 'docker info'
Containers: 2
Images: 37
Server Version: 1.9.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 41
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-30-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 15.67 GiB
Name: infra-07-fd
ID: KASU:76FJ:7YXF:6PQ3:YD7B:GH3K:CURQ:EROM:NZY4:HZ6I:JFPP:FBJX
Labels:
 provider=generic
Cluster store: consul://172.16.7.254:8500
Cluster advertise: 172.16.7.253:2376
WARNING: No swap limit support
dave-tucker (Contributor) commented:

Hi @dweomer, yes, your reading of #552 is correct; this is expected behaviour.
Multicast packets currently do not cross from node to node.
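One rough way to confirm this from a node is to watch the overlay's VXLAN traffic while elasticsearch starts up (a sketch; it assumes eth0 is the host interface and the default VXLAN port of 4789):

# unicast container traffic shows up here encapsulated in VXLAN, but the
# 224.2.2.4:54328 zen discovery pings never make it into the tunnel
sudo tcpdump -ni eth0 udp port 4789
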
As this seems to be covered by the other issue, are you ok if we close this one?

dweomer commented Nov 6, 2015

Yes, @dave-tucker, that is fair. Is there any hope that this issue will be addressed in the next release or otherwise soon-ish?

dave-tucker (Contributor) commented:

@dweomer I'm not sure, as I don't know of a solution. I'll prompt the maintainers for an update in #552.
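
In the meantime, an application-level workaround (untested on this exact setup) is to disable multicast in elasticsearch and list the peers explicitly, since unicast traffic does cross the overlay; the network and container names below are placeholders:

# repeat on the other node with the names swapped; unicast zen discovery
# uses tcp/9300, which the overlay network carries fine
docker run -d --name es-a --net es-net -e constraint:node==infra-07-fa elasticsearch:1.7 \
    elasticsearch \
    -Des.discovery.zen.ping.multicast.enabled=false \
    -Des.discovery.zen.ping.unicast.hosts=es-b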
