Update the nginx controller manifests #103
Conversation
Coverage remained the same at 40.733% when pulling 575ca307c56d94491de3dcd1ec1aa70be4452e6a on luxas:to_deployments into 567fa3b on kubernetes:master.
@luxas please do not use hostNetwork in the default yaml file. This is not a requirement of the controller. The rest lgtm
Why not hostNetwork? I just want to understand... Thanks for reviewing so quickly btw ;)
Just to avoid confusion. If you use
Good example, but keep in mind that in AWS there are no issues. You can run the kops addon and scale it to "any" number and it will work.
I think we should find the edge cases in GCE and add an example to fix, or at least avoid, them.
My use case for using hostNetwork
I agree with not having hostNetwork
@@ -41,11 +39,6 @@ spec:
             valueFrom:
               fieldRef:
                 fieldPath: metadata.namespace
-        ports:
Even if the ports aren't used (e.g. in hostNetwork), please leave them so that the pod conflicts with itself for the right reason.
With this change, I suspect the deployment could schedule 2 on the same node and one would just crashloop fighting for the port 80 binding.
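To illustrate the reviewer's point, here is a hedged sketch of the kind of ports section being asked for (the container name is a placeholder, not the exact manifest from this PR). Declaring hostPort alongside containerPort makes the scheduler treat ports 80/443 as node-level resources, so a second replica is placed on another node instead of crash-looping over the port binding.

```yaml
# Illustrative container spec fragment: hostPort reserves the port on the
# node at scheduling time, so two replicas cannot land on the same node.
containers:
  - name: nginx-ingress-controller   # placeholder name
    ports:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
```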
Thanks, I forgot about that when I removed those lines; added them back now.
…pace, be deployments and some general cleanup
I am still against not setting hostNetwork.

What worries me the most is maintainability when having many versions of the same yaml manifest (especially when there is not a big technical difference between them). Anyway, I did as you said, and here are two different versions of everything with a one-line change.

I see the nginx controller as the best (and only, atm) solution for load-balancing requests into a bare-metal cluster, and then it really should work on bare metal. The CNI issue linked above is really daunting and messes everything up. Kops (when talking AWS) has four networking modes in v1.4 and three in v1.5 (the classic one is removed in v1.5), of which one is kubenet.

This is just food for thought. I accept the decision and would in any case like to get something like this merged, but I still think we can/should use hostNetwork until the CNI kubelet networking plugin supports hostPort.
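For reference, a minimal sketch of what that one-line difference amounts to in the Deployment's pod spec (hostNetwork is a standard pod-level field; the surrounding names and ports are placeholders, not the exact manifest from this PR):

```yaml
# Sketch of the bare-metal variant: with hostNetwork the pod shares the
# node's network namespace, so nginx binds ports 80/443 on the node
# directly and no hostPort support is needed from the CNI plugin.
spec:
  template:
    spec:
      hostNetwork: true   # the single line that differs between the two manifests
      containers:
        - name: nginx-ingress-controller   # placeholder name
          ports:
            - containerPort: 80
            - containerPort: 443
```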
You are right about this. That said, as you already know, it is really hard to create a one-size-fits-all solution. Please see this PR as an improvement that shows how it is possible to use the ingress controller with kubeadm. Even now it is documented why hostPort does not work with CNI. About kops, there is an addon that works ootb with the default network (kubenet) and with calico (available in the next kops version).
/lgtm
@luxas thanks!
Yes, I know it's very hard to make anything k8s-related one-size-fits-all indeed. What I meant about kops is that the current nginx-ingress-controller manifest (without hostNetwork) won't work with calico, weave, flannel or basically anything but kubenet. Right now, kubenet is the only network provider that supports hostPort, so the current manifest is unportable, and that's what's concerning me. If we made it hostNetwork, it would be portable. Are we on the same page now?
...and just a little general cleanup
PTAL @aledbf @bprashanth