Nomad Docker bridge, provide support for Fan Networking #1800
@tlvenn Just read about Fan, looks very similar to many SDN solutions: virtual IP + NATing. This will most likely be solved outside of the Docker driver.
Hi @dadgar, it's similar in that it deals with multi-host networking, yet very different in scope and features. It's very simple: it does not provide ACLs/security or multiple networks, and it does not use Docker 1.9 network plugins. It simply leverages the bridge networking mode that Nomad already understands. When you run a container, you don't specify any network-related options that Nomad would somehow need to be aware of.

So while it's a networking feature at its core, I kinda hope it can be treated differently than an SDN solution in terms of priority/milestone. Unless I am overlooking something, it should be reasonably simple to implement without painting you into a corner for when SDN solutions are supported, since those will operate in a different network mode (not bridge mode).

Fan networking is also supported by LXC/LXD, which means your LXC driver could leverage it the same way. Without any major effort you could ship limited (bridge mode only) multi-host networking for Docker and LXC, which would already be a pretty good stepping stone until Nomad can support SDN networks.
@tlvenn It isn't quite that simple. If you care about the service discovery part of Nomad, then Nomad somehow needs to get the IPAM results, encapsulated or not. I run a similarly simple networking model: each Nomad client node has the Docker bridge plugged into a flat /16 L2 network, and I let the Docker daemon on the node handle IPAM out of a /24 subnet for my containers. Without Nomad (by way of Consul) registering the correct IPs of the containers, all I can rely on is Docker's native NATing of the /24, and all services on that node point to the Docker bridge. If you have your own plumbing for service registry and discovery, then you can use any networking model of your choice anyway; Nomad doesn't get in the way.
Hi @dvusboy
Once the container is created and its ID is known, a simple InspectContainer will reveal the container IP assigned by Docker on the bridge. In this case, I would expect Nomad to publish the container IP to Consul for service discovery and not to do any dynamic port mapping and forwarding to the host.

True, but unless I missed something, there is no way to prevent Nomad from automatically forwarding dynamic ports to the host, which makes little sense when containers can talk to each other directly.
And I think this is the specific ask that we're all waiting for, regardless of the networking model. In my case, I have to use […]; the path to […].
Ya, don't get me started on Docker volume support... It's honestly pretty ridiculous that we had to wait until 0.5 for them to finally enable this simple feature, because they have a bigger plan to address stateful containers as a whole in Nomad in some future release. That's great, but it shouldn't create so much friction for the community in the meantime.

And sadly it seems history is repeating itself with their grand vision on how to address and handle SDNs as a whole, rather than facilitating multi-host networking in the simple case where the Docker bridge already provides it transparently. Don't get me wrong, Nomad is a fantastic and promising product, but sometimes they seem to make choices that directly hinder community adoption for no obvious reason.
We just started evaluating the macvlan driver to use the underlay network for L3 routing to containers. To solve this issue today I only see two options: […]

Both of these options kind of interfere with the clean and native container deployment process I have in mind. An option for Nomad to register a service with the IP of an interface gathered from an inspection would really be useful, kind of like what Container Pilot does from within the container.
#2709 will allow advertising the IP:Port defined by the Docker driver and will be in 0.6. If you use a Docker network driver other than host or bridge (the default), then that IP:Port will automatically be advertised. You can control this behavior on a per-service basis.
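For readers landing here later, a minimal sketch of what that per-service control looks like in a job file, assuming the `address_mode` parameter on the service and check stanzas; the image, names, and port are placeholders, so check the docs for your Nomad version:

```hcl
job "cache" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image        = "redis:3.2"
        # With a Fan or flannel bridge, the container IP is routable
        # across hosts, so no port mapping is needed.
        network_mode = "bridge"
      }

      service {
        name         = "redis"
        # Advertise the IP and port assigned by the Docker driver
        # instead of the host address chosen by the scheduler.
        address_mode = "driver"
        port         = 6379

        check {
          type         = "tcp"
          port         = 6379
          address_mode = "driver"
          interval     = "10s"
          timeout      = "2s"
        }
      }
    }
  }
}
```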
Hi. I am running a similar scenario: Nomad v0.7.1, Docker with bridge network, and flannel to provide the overlay network. I am NOT doing any port mappings and I do not want to map any ports to host ports. I want to use container_ip:port and this works; it is just that Nomad does not display this anywhere.
Great question @Garagoth. At the moment we simply lack the API for it. The current allocation API returns the networks assigned by the scheduler and therefore has no knowledge of the IPs and ports assigned by a driver on the client node. We currently lack the new APIs (or plumbing for the existing APIs) to get this data from the task driver back to the servers. It's absolutely something we will address in the future. In the meantime you'll have to use Consul as your source of truth for service addresses in Nomad.
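For anyone hitting the same wall, one way to consume those Consul-registered addresses from another Nomad task is the template stanza, which renders data from Consul at runtime. A minimal sketch, assuming a service registered under the name "redis" and a placeholder application image:

```hcl
task "app" {
  driver = "docker"

  config {
    image = "myorg/app:latest" # placeholder image
  }

  # Render the address that the "redis" service advertised to Consul
  # (the driver-assigned container IP:port) into environment variables.
  template {
    destination = "local/redis.env"
    env         = true

    data = <<EOF
{{ with service "redis" }}{{ with index . 0 }}REDIS_ADDR={{ .Address }}:{{ .Port }}{{ end }}{{ end }}
EOF
  }
}
```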
Um, let me try to understand: Nomad is able to get the proper IP from Docker and pass it to Consul for health checking (so it is able to get this data from the driver on the client node), but is not able to get the same IP and display it in the UI? So this is a client node -> server communication issue? And while we are at exposing the container IP, please add an interpolated variable with it :-)
Yup. I know it seems like a trivial thing, but we have quite the backlog of "trivial" things we'd like to get to. The problem is rarely writing the code; it's making sure we're writing the right code, because once data makes it into an API we need to maintain it there (more or less) forever. For example, we can't just change the current address being advertised because it would break backward compatibility.
Since driver networks are only defined after a task is started, the service and check stanzas are the only places it could be interpolated. The current service and check […]

(In the future, network plugins may allow us to have address information available before we start a task, and that would open up a lot of options! Not something that will make it into the 0.8 release, I'm afraid.)
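To illustrate the distinction, the runtime variables that can be interpolated today (for example NOMAD_ADDR_<label>) resolve to the host network assigned by the scheduler, not to the driver-assigned container IP. A rough sketch with placeholder names:

```hcl
task "app" {
  driver = "docker"

  config {
    image = "myorg/app:latest" # placeholder image
    args  = [
      # Interpolates the scheduler-assigned *host* IP:port for the
      # "http" label, not the container IP assigned by Docker.
      "--advertise", "${NOMAD_ADDR_http}"
    ]
  }

  resources {
    network {
      mbits = 10
      port "http" {}
    }
  }
}
```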
But in this situation the correct IP is advertised to Consul, yet no IP at all is advertised back to the Nomad server.

Well, right now I have to run a command in the docker config section, something like […]

Um... before the Docker container is created? I thought that since Docker assigns IPs, this is something nearly impossible to know in advance...? Same with, for example, an LXC+macvlan+DHCP combo (I used that with great results).
In my testing Docker only assigns IPs after starting a container. I'd love to hear a way to get an IP without starting the container! If we're able to do that, then we can also solve the problem of not being able to interpolate the IP for your binary to use.
Closing this issue as […]. Feel free to reopen if I misunderstand the original issue, but please open a new issue for related features/bugs, as this issue is getting pretty long.
Does not work with advertising IPv6 addresses: #6412
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. |
My understanding regarding Nomad and Docker networking is that Nomad doesn't currently understand overlay networks but natively handles the `host` and `bridge` network modes.

With the `bridge` network mode, the assumption is that it will be the Docker bridge, so there is no cross-host container networking, forcing Nomad to fall back to port mapping on the host IP.

I am using Ubuntu Fan Networking ( https://wiki.ubuntu.com/FanNetworking ) to provide Docker with a bridge where cross-host container networking is provided transparently. In such a case, I would like Nomad to use the container IP that Docker allocates instead of relying on port mapping.
Could you add an option in the Docker driver to indicate whether the bridge can provide multi-host networking and, if so, make use of the container IP?
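To make the ask concrete, here is a rough sketch of how such an option could look in a job file; note that `advertise_container_ip` is a hypothetical name used only for illustration, not an existing Nomad option:

```hcl
task "web" {
  driver = "docker"

  config {
    image        = "nginx:1.11" # placeholder image
    network_mode = "bridge"     # the Fan-backed Docker bridge

    # Hypothetical flag, for illustration only: tell Nomad that this
    # bridge is routable across hosts, so it should register the
    # container IP in Consul instead of mapping dynamic ports onto
    # the host IP.
    advertise_container_ip = true
  }
}
```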
I believe this should be fairly trivial to implement and hopefully can see the light of day before work starts (or finishes) on supporting overlay networks, SDNs, and the like.
Thanks in advance.