
Add routes to all peered VPCs (config option) #24

Merged: 1 commit, Jan 27, 2018

Conversation

theatrus
Contributor

If the key `routeToVpcPeers` is set to `true` on the IPAM
configuration, all known peered VPC CIDRs will be added to the IPvlan
route table, allowing for direct VPC<->VPC communication.

Fixes #21 / #23
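As a sketch, the option would sit in the plugin's network config roughly like this; everything other than the `routeToVpcPeers` key is illustrative, not taken from the PR:

```json
{
  "cniVersion": "0.3.1",
  "name": "cni-ipvlan-vpc-k8s",
  "ipam": {
    "type": "cni-ipvlan-vpc-k8s-ipam",
    "routeToVpcPeers": true
  }
}
```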

paulnivin previously approved these changes Jan 26, 2018
@PaulFurtado

PaulFurtado commented Jun 27, 2018

Hey there,

I'm not sure how your VPCs are configured, but I'm curious about the choice of discovering routes via the DescribeVpcPeeringConnections API instead of the DescribeRouteTables API (to get the route table associated with the subnet of the ENI).

In our case, we have several VPC peering connections, which this picks up on, but we also have many routes going through "Virtual Private Gateways" as well as directly through other instances. For background, virtual private gateways are what Direct Connect fiber connections and managed VPN tunnels terminate on, and the routes pointed directly at other instances are for other VPN solutions.

An example of one of our route tables from the AWS console:
[screenshot: a route table in the AWS console, 2018-06-26]

In fact, in that screenshot you'll see that there are 2 VPC peering connections in the route table, but there are actually 4 VPC peering connections in this VPC. The code in master picks up all 4, but only the two in the route table are actually routable from the subnets it is associated with.

I'm working on a PR to pull the routes from the routing table, but I'm curious if there is some configuration where the DescribeVpcPeeringConnections feature would still be necessary. If so, I can make each feature individually toggleable and merge/deduplicate the routes.

Thanks!
Paul
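A minimal sketch of the route-table approach described above. The struct mirrors the relevant fields of `ec2.Route` from aws-sdk-go (those field names exist in the EC2 API), but the struct, function names, and sample data here are illustrative, not the actual PR code:

```go
package main

import "fmt"

// route is a simplified stand-in for ec2.Route; the field names match
// the EC2 API, but only the fields relevant here are included.
type route struct {
	DestinationCidrBlock   string
	VpcPeeringConnectionId string // set for pcx-* routes
	GatewayId              string // set for vgw-*/igw-* routes
	InstanceId             string // set for instance-NAT style routes
}

// routableCidrs returns every destination CIDR the subnet's route table
// can actually reach, regardless of target type (peering connection,
// virtual private gateway, or instance), deduplicated.
func routableCidrs(routes []route) []string {
	seen := map[string]bool{}
	var out []string
	for _, r := range routes {
		if r.DestinationCidrBlock == "" || seen[r.DestinationCidrBlock] {
			continue
		}
		seen[r.DestinationCidrBlock] = true
		out = append(out, r.DestinationCidrBlock)
	}
	return out
}

func main() {
	table := []route{
		{DestinationCidrBlock: "10.1.0.0/16", VpcPeeringConnectionId: "pcx-aaaa"},
		{DestinationCidrBlock: "192.168.0.0/22", GatewayId: "vgw-bbbb"},
		{DestinationCidrBlock: "10.1.0.0/16", VpcPeeringConnectionId: "pcx-aaaa"}, // duplicate
	}
	fmt.Println(routableCidrs(table)) // [10.1.0.0/16 192.168.0.0/22]
}
```

The key difference from the `DescribeVpcPeeringConnections` approach is that this only ever emits destinations the subnet's route table actually carries, so peering connections absent from the table are never picked up.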

@theatrus
Contributor Author

We currently don't use any virtual gateways, however this would be a good addition. I can double check how our network setup looks now (which has gotten more complicated over time, as they do).

@lbernail
Contributor

lbernail commented Jun 27, 2018

We are relying on the vpc peering feature and in our case it's enough because we don't have VPNs configured (yet?) and we add the peering routes to all our pod subnets.

I was wondering if an alternative could be to use the ipvlan interface as the default route and only send explicit traffic through the PTP one:

  • service CIDR (that would require an additional flag)
  • node IP (to avoid leaving the instance only to come back in on the primary interface)
  • AWS metadata (I think it can be reached from the pods directly, but routing it over IPvlan would break kube2iam)
  • maybe other explicitly defined ranges

The advantage would be that most traffic would go through the IPVLAN interface but it would not allow for nodes with public IPs (today traffic outside the VPC and peers is masqueraded on the primary interface).
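As an illustration of that alternative, the small host-side route list could be computed roughly like this. All names, parameters, and sample addresses are hypothetical, not plugin code:

```go
package main

import "fmt"

// hostSideRoutes returns the destinations that would be routed via the
// host (PTP) interface under the "default via IPvlan" scheme sketched
// above; everything else would follow the pod's default route out the
// IPvlan interface.
func hostSideRoutes(serviceCIDR, nodeIP string, extra []string) []string {
	routes := []string{
		serviceCIDR,          // kube service VIPs (would need an additional flag)
		nodeIP + "/32",       // hairpin back to the node's primary interface
		"169.254.169.254/32", // AWS metadata, so kube2iam keeps working
	}
	return append(routes, extra...)
}

func main() {
	fmt.Println(hostSideRoutes("10.96.0.0/12", "10.0.3.17", []string{"172.20.0.0/16"}))
	// [10.96.0.0/12 10.0.3.17/32 169.254.169.254/32 172.20.0.0/16]
}
```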

@PaulFurtado

I've thought about the same; it may also make sense to have the option of simply supplying a list of routes in the config file.

The reason the default route must go through the host is that in a public subnet, only a private IP with an associated public IP can reach the internet. In a private subnet, that restriction doesn't apply.

That said, there is an easy alternative for public subnets: since there are only three private IP ranges, you can set a route for each over the ipvlan interface via the subnet gateway (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) and keep your default route pointed at the host. I was using this strategy before I started using this CNI plugin so I wouldn't need so many rules, and it's resilient to subnet changes (if you bring a new VPC online, you shouldn't have to roll every pod). The one expectation it breaks is for people who have configured explicit routes to internet IP ranges in their AWS route tables, which isn't that common but has valid use cases.
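The three-range strategy above can be sanity-checked with a short sketch; `rfc1918` and `isPrivate` are illustrative names, not anything from the plugin:

```go
package main

import (
	"fmt"
	"net"
)

// rfc1918 is the full private address space. Routing these three
// prefixes over the IPvlan interface (via the subnet gateway), while
// leaving the default route on the host, covers any VPC or peered VPC
// without per-peer routes.
var rfc1918 = []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"}

// isPrivate reports whether ip falls inside one of the RFC 1918 ranges,
// i.e. whether it would take the IPvlan path under this scheme.
func isPrivate(ip string) bool {
	addr := net.ParseIP(ip)
	for _, cidr := range rfc1918 {
		_, block, _ := net.ParseCIDR(cidr)
		if block.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isPrivate("172.31.5.9"), isPrivate("8.8.8.8")) // true false
}
```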

That said, I think automatic configuration via the routing table is a sane enough default, but the only way to handle every use case would be making the list of routes configurable.
