Update the nginx controller manifests #103

Merged: 2 commits merged into kubernetes:master from luxas:to_deployments on Jan 5, 2017

Conversation

@luxas luxas (Member) commented Jan 3, 2017

  • to run in the kube-system namespace
  • to use hostnetwork
  • to use deployments instead of rcs
  • to work with kubeadm
  • to create the service by manifest

...and just a little general cleanup
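
For reference, a minimal sketch of the overall shape these changes aim for; the names, image tag, args and Service shown below are illustrative, not the exact manifests in this PR:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system            # run in kube-system instead of default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      hostNetwork: true             # discussed below; one variant enables it, the other does not
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3   # illustrative tag
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
---
# Service created by manifest instead of with kubectl expose (shown here for the default backend)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
spec:
  selector:
    k8s-app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080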

PTAL @aledbf @bprashanth

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Jan 3, 2017

@coveralls (Coverage Status) commented:

Coverage remained the same at 40.733% when pulling 575ca307c56d94491de3dcd1ec1aa70be4452e6a on luxas:to_deployments into 567fa3b on kubernetes:master.

@aledbf aledbf (Member) commented Jan 3, 2017

to use hostnetwork

@luxas please do not use hostNetwork in the default yaml file; it is not a requirement of the controller.
If you need hostNetwork, maybe you can add an example?

the rest lgtm
Thanks!

@luxas luxas (Member, Author) commented Jan 3, 2017

Why not hostNetwork?
hostPort fails on everything but GCE with kubenet, which kind of defeats the purpose of a bare-metal load balancer...

I just want to understand... Thanks for reviewing so quickly btw ;)

@aledbf aledbf (Member) commented Jan 3, 2017

Why not hostNetwork?

Just to avoid confusion. If the default manifest uses hostNetwork, it seems like it's mandatory.

HostPort fails on everything but gce with kubenet which kind of defeats the purpose of a bare-metal loadbalancer...

Good example, but keep in mind that on AWS there are no issues. You can run the kops addon and scale it to "any" number of replicas and it will work.

$ kubectl get deployment
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
grpc-weather            1         1         1            1           4d
ingress-nginx           1         1         1            1           22d
nginx-default-backend   1         1         1            1           22d
$ kubectl scale deployment ingress-nginx --replicas=5
deployment "ingress-nginx" scaled

$ kubectl get deployment
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
grpc-weather            1         1         1            1           4d
ingress-nginx           5         5         5            5           22d
nginx-default-backend   1         1         1            1           22d

$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
grpc-weather-2395281943-uw3to           1/1       Running   0          4d
ingress-nginx-1750395818-5i5w3          2/2       Running   0          21s
ingress-nginx-1750395818-9y6fb          2/2       Running   0          21s
ingress-nginx-1750395818-cf1te          2/2       Running   0          21s
ingress-nginx-1750395818-hkrsg          2/2       Running   0          21s
ingress-nginx-1750395818-weosl          2/2       Running   2          6d
nginx-default-backend-623339674-g3ttf   1/1       Running   1          22d

I think we should find the edge cases in GCE and add an example to fix, or at least avoid, them.

@euank euank (Contributor) commented Jan 3, 2017

My use case for hostNetwork is twofold: 1) IPv6 support for things directly accessing the ingress, and 2) preserving the source IP correctly, regardless of Docker's configured proxy mode / iptables setup.

hostNetwork does have a chance of breaking the HTTP healthcheck though, depending on the node's network configuration.

I agree with not setting hostNetwork by default, but with documenting the above gotchas and the reasons for using it as an alternate configuration.

@@ -41,11 +39,6 @@ spec:
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
@euank euank (Contributor) commented Jan 3, 2017
Even if the ports aren't used (e.g. in hostNetwork), please leave them so that the pod conflicts with itself for the right reason.

With this change, I suspect the deployment could schedule 2 on the same node and one would just crashloop fighting for the port 80 binding.
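
Concretely, a sketch of the container-level block being asked for (the port values here are just the usual 80/443, not necessarily the exact ones in the manifest); declaring hostPort is what makes two replicas conflict at scheduling time rather than both landing on one node and fighting over the bind:

        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443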

@luxas luxas (Member, Author) replied:

Thanks, I forgot about that when I removed those lines; added them back now.

@luxas luxas (Member, Author) commented Jan 5, 2017

I'm still against omitting hostNetwork just because people might think it's a requirement.
I added a comment that clearly states the reasons for enabling or disabling it.

What worries me the most is maintainability when having many versions of the same yaml manifest (especially when there is not a big technical difference).

Anyway, I did as you said, and there are now two different versions of everything with a one-line change (see the sketch below).

I see the nginx controller as the best (and, at the moment, only) solution for load balancing requests into a bare-metal cluster, so it really should work on bare metal. The CNI issue linked above is really daunting and messes everything up.

Kops (when talking AWS) has four networking modes in v1.4 and three in v1.5 (classic is removed in v1.5), of which one is external, a super-custom, un-spec'd plugin that I guess not many are using for real. That leaves kubenet (which only works on GCE, GKE and AWS w/ kops) and CNI.
So if you're on AWS, you'd have to use the kubeadm manifest with hostNetwork if you're using CNI, and you can use either hostNetwork or just hostPort if you're using kubenet.

This is just food for thought. I accept the decision, and would in any case like to get something like this merged, but still think we can/should use hostNetwork until the CNI kubelet networking plugin supports hostPort.
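
As a rough illustration (the comment wording below is mine, not the exact text added in the PR), the two manifest variants would differ by a single line in the pod spec:

    spec:
      # hostPort only works with kubenet today; with a CNI plugin (calico, weave,
      # flannel, ...) hostNetwork makes ports 80/443 reachable on the node.
      hostNetwork: true   # the other variant simply omits this line and relies on hostPort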

@coveralls (Coverage Status) commented:

Coverage decreased (-0.02%) to 40.711% when pulling 7cf6fe3 on luxas:to_deployments into 567fa3b on kubernetes:master.

@aledbf aledbf (Member) commented Jan 5, 2017

What worries me the most is maintainability when having many versions of the same yaml manifest (especially when there is not a big technical difference).

You are right about this. That said, as you already know, it is really hard to create a one-size-fits-all solution. Please see this PR as an improvement that shows how it is possible to use the ingress controller with kubeadm. And it is now documented why hostNetwork may or may not be needed, depending on whether you use CNI :)

About kops: there is an addon that works out of the box with the default network (kubenet) and with calico (available in the next kops version).
https://github.com/kubernetes/kops/blob/master/addons/ingress-nginx/v1.4.0.yaml
I'm not sure what issues you found using kops. Can you post more information about the issues?

@aledbf aledbf self-assigned this Jan 5, 2017
@aledbf aledbf (Member) commented Jan 5, 2017

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Jan 5, 2017
@aledbf aledbf merged commit 5186e93 into kubernetes:master Jan 5, 2017
@aledbf aledbf (Member) commented Jan 5, 2017

@luxas thanks!

@luxas luxas (Member, Author) commented Jan 5, 2017

Yes, I know, it is indeed very hard to make anything k8s-related one-size-fits-all.

What I meant about kops is that the current nginx-ingress-controller manifest (without hostNetwork) won't work with calico, weave, flannel or basically anything but kubenet. Right now, kubenet is the only network provider that supports hostPort, so the current manifest is unportable, and that's what's concerning me. If we made it hostNetwork, it would be portable.

Are we on the same page now?
