This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Kubernetes networking over AWS VPC #1047

Closed
camilb opened this issue Nov 29, 2017 · 15 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@camilb
Contributor

camilb commented Nov 29, 2017

https://github.com/aws/amazon-vpc-cni-k8s

@mumoshu
Contributor

mumoshu commented Dec 1, 2017

Fun info: Calico will be enhanced to utilize Security Groups as the backend for network policies.
Ref: aws/amazon-vpc-cni-k8s#7 (comment)

@mumoshu
Contributor

mumoshu commented Dec 1, 2017

I'm really looking forward to this!

My only concern is the limit on IPs per ENI, e.g. 10 for c4.large. Only 10 pods per node at maximum? That sounds a bit low.

It is still better than the CNI plugin for ECS, though, which is limited by the maximum number of ENIs per EC2 instance (3 for c3.large, so only 2 pods per node?).

Ref: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI
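
As an aside, those per-instance-type limits can also be read programmatically; a minimal sketch, assuming boto3 is installed and AWS credentials are configured (the DescribeInstanceTypes call is just one way to look up the same table as the doc above):

```python
import boto3

# Look up the ENI / IPv4-per-ENI limits that the linked table documents.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["c4.large"])
net = resp["InstanceTypes"][0]["NetworkInfo"]
print(net["MaximumNetworkInterfaces"])   # 3
print(net["Ipv4AddressesPerInterface"])  # 10
```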

@liwenwu-amazon

A c4.large node should be able to run 27 Pods.
Here is the calculation:
3 ENIs * (10 IP addresses - 1)
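
For illustration, a minimal sketch of that arithmetic (the "- 1" is, as far as I understand, because the primary IP address of each ENI is not assigned to pods):

```python
# A sketch of the calculation above: ENI and per-ENI IP limits for the
# instance type mentioned in this thread, copied from the EC2 docs
# (illustrative values, not fetched live).
ENI_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "c4.large": (3, 10),
}

def max_pods(instance_type: str) -> int:
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    # The primary IP address of each ENI is not handed out to pods,
    # hence the "- 1" per ENI.
    return enis * (ips_per_eni - 1)

print(max_pods("c4.large"))  # 27
```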

@mumoshu
Contributor

mumoshu commented Dec 1, 2017

@liwenwu-amazon Thanks for chiming in!

Yes, it should. By the way, does amazon-vpc-cni-k8s already support using multiple ENIs per EC2 instance, e.g. to serve 3 ENIs * (10 IPs - 1) pods?

@liwenwu-amazon

Yes.

@pawelprazak

So if I understand it correctly, according to the proposal the calculation would be:

Max IPs = min(N * M - N, subnet's free IPs)

where N is the number of ENIs and M is the number of IPs per ENI. For m4.xlarge: 4 * 15 - 4 = 56 pods max.

Please correct me if I'm wrong.
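
A quick sketch of that formula, with N as the number of ENIs and M as the IPv4 addresses per ENI (the `subnet_free_ips` argument is hypothetical here; in practice it would come from the subnet's available-address count):

```python
# Sketch of the proposed formula: the pod IP pool is bounded both by the
# instance's ENI/IP limits and by what is still free in the subnet.
def max_pod_ips(n_enis: int, ips_per_eni: int, subnet_free_ips: int) -> int:
    return min(n_enis * ips_per_eni - n_enis, subnet_free_ips)

# m4.xlarge: 4 ENIs, 15 IPs per ENI, assuming the subnet still has plenty of room
print(max_pod_ips(4, 15, 10_000))  # 56
```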

@camilb
Contributor Author

camilb commented Dec 5, 2017

@mumoshu
Contributor

mumoshu commented Dec 5, 2017

@liwenwu-amazon Great! Thanks for clarifying 👍

Would you also mind enlightening us about how amazon-vpc-cni-k8s compares with cni-ipvlan-vpc-k8s?

One gotcha of the latter (from the Lyft folks) is that source pod IPs are lost in Pod-Svc-Pod communication, similar to how kube-proxy behaved in user-space mode.

amazon-vpc-cni-k8s doesn't have such a restriction, right?

@mumoshu
Contributor

mumoshu commented Dec 5, 2017

@liwenwu-amazon

amazon-vpc-cni-k8s will NOT change pod IPs for Pod-Svc-Pod communication.

@mumoshu
Contributor

mumoshu commented Dec 5, 2017

@liwenwu-amazon Thanks again for the clarification.
Really looking forward to trying it. Please keep up the great work!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
