generate ingress-nginx manifests #133
Conversation
Signed-off-by: Manabu McCloskey <[email protected]>
Even internally, we should use the standard ports 443 and 80, the same as the ingress routes available externally.
Why should we then hack the ingress service to use 8443?
- https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

patches:
- path: deployment-ingress-nginx.yaml
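For context, these hunks belong to a kustomization.yaml shaped roughly like the sketch below; only the remote resource URL and the patch path are taken from this PR, the rest is standard kustomize layout:

```yaml
# Sketch of the surrounding kustomization. Only the remote resource URL and
# the patch path come from this PR; the other fields are boilerplate.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
patches:
- path: deployment-ingress-nginx.yaml
```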
Do we need this line, given that you don't patch the deployment using kustomize?
This is done to ensure the manifests match what we use currently. If I don't include these, the diff would look like this:
@@ -413,10 +413,6 @@ spec:
       app.kubernetes.io/component: controller
       app.kubernetes.io/instance: ingress-nginx
       app.kubernetes.io/name: ingress-nginx
-  strategy:
-    rollingUpdate:
-      maxUnavailable: 1
-    type: RollingUpdate
   template:
     metadata:
       labels:
@@ -429,6 +425,7 @@ spec:
       containers:
       - args:
         - /nginx-ingress-controller
+        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
@@ -436,9 +433,6 @@ spec:
         - --validating-webhook=:8443
         - --validating-webhook-certificate=/usr/local/certificates/cert
         - --validating-webhook-key=/usr/local/certificates/key
-        - --watch-ingress-without-class=true
-        - --publish-status-address=localhost
-        - --enable-ssl-passthrough
         env:
         - name: POD_NAME
           valueFrom:
@@ -470,11 +464,9 @@ spec:
         name: controller
         ports:
         - containerPort: 80
-          hostPort: 80
           name: http
           protocol: TCP
         - containerPort: 443
-          hostPort: 443
           name: https
           protocol: TCP
         - containerPort: 8443
@@ -510,7 +502,7 @@ spec:
       nodeSelector:
         kubernetes.io/os: linux
       serviceAccountName: ingress-nginx
-      terminationGracePeriodSeconds: 0
+      terminationGracePeriodSeconds: 300
hmmm, interesting problem. I am fine with adding the additional port 8443, assuming that it will only be used to resolve the in-cluster OIDC issuer. if so, maybe we bake the script into the
The problem with using 443 and 80 is that ports below 1024 are historically considered privileged and require additional permissions. Some organizations block running applications on these ports. I agree that it would be really nice if we could just use 443 and not have to do this kind of thing at all.
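As background for the privileged-port point: inside a container the usual escape hatch is granting CAP_NET_BIND_SERVICE, which is roughly what upstream ingress-nginx does for its controller container. A minimal sketch of that pattern follows (illustrative, not a change proposed in this PR), and note it addresses only the in-container side:

```yaml
# Sketch: allow a non-root container to bind ports below 1024 by granting
# CAP_NET_BIND_SERVICE. This does not help if the host's 80/443 are blocked
# by organizational policy or already in use by another process.
securityContext:
  runAsNonRoot: true
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]
```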
We can add an additional target to
I would also be fine with this being configurable FWIW - I do think we need to support non-443 regardless. In addition to being privileged, there's a common case of having another process already listening on these ports...
Don't forget that the target audience of idpbuilder at the moment is developers working on their laptops, with Docker installed, and they will
I agree. I think this should be configurable like Greg mentioned. I'll open an issue to track.
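A sketch of where such a setting could surface, assuming the Docker/kind style local cluster described above; the host-side port is the value a flag would control (the flag name below is hypothetical):

```yaml
# Sketch of a kind cluster config. hostPort (8443 today) is the value a
# hypothetical --ingress-https-port flag would make configurable; the
# node-side containerPort stays 443.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
```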
This PR adds a script to generate ingress-nginx manifests.
It also adds an alternative internal HTTPS port (8443) for ingress-nginx.
The reason for adding 8443 is that it allows us to support a slightly more complex setup where the in-cluster URI must match the external URI exactly.
For example, for OIDC endpoint discovery to work, the issuer URL given to clients must match what is returned by the OIDC broker. See below:
1. Keycloak is available at https://kyecloak.cnoe.localtest.me:8443 on my local machine.
2. The issuer URL returned by Keycloak is https://keycloak.cnoe.localtest.me:8443/realms/cnoe.
3. Argo Workflows cannot use https://kyecloak.cnoe.localtest.me:8443 as the issuer URL because the Argo Workflows API server must first contact OIDC's well-known endpoint, and the name will resolve to 127.0.0.1 in the pod.
4. Inside the cluster, kyecloak.cnoe.localtest.me becomes ingress-nginx-controller.ingress-nginx.svc.cluster.local.
5. The issuer URL is then effectively https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8443/realms/cnoe.
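One common way to implement the hostname mapping in steps 4 and 5 is a CoreDNS rewrite rule. The sketch below is illustrative only: the surrounding Corefile is the stock kubeadm/kind default, and whether this PR configures DNS this way is an assumption, not something stated in the change:

```yaml
# Sketch: a CoreDNS rewrite that resolves the external hostname to the
# ingress controller's in-cluster service, so pods can use the same URL as
# external clients. Not necessarily how this PR wires it up.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name kyecloak.cnoe.localtest.me ingress-nginx-controller.ingress-nginx.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```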