
generate ingress-nginx manifests #133

Merged
merged 1 commit into cnoe-io:main on Jan 29, 2024

Conversation

nabuskey
Collaborator

This PR adds a script to generate ingress-nginx manifests.

It also adds an alternative internal https port (8443) for ingress-nginx.

The reason for adding 8443 is that it allows us to support a slightly more complex setup in which the in-cluster URI must exactly match the external URI.

For example, for OIDC endpoint discovery to work, the issuer URL given to clients must match what is returned by the OIDC broker. See below:

  1. Keycloak is installed and configured. An Ingress is created, and the endpoint is available at https://keycloak.cnoe.localtest.me:8443 on my local machine.
  2. Now, I want to configure Argo Workflows to use the Keycloak instance as its IdP. Normally, I would specify the externally available Keycloak issuer URL, such as https://keycloak.cnoe.localtest.me:8443/realms/cnoe.
  3. Unfortunately, I cannot specify https://keycloak.cnoe.localtest.me:8443 as the issuer URL because the Argo Workflows API server must first contact OIDC's well-known endpoint, and the name will resolve to 127.0.0.1 in the pod.
  4. I can get it to resolve by creating a rewrite rule in CoreDNS so that keycloak.cnoe.localtest.me becomes ingress-nginx-controller.ingress-nginx.svc.cluster.local.
  5. Now, the Argo Workflows API server will send requests to https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8443/realms/cnoe.
  6. This doesn't work because ingress-nginx doesn't listen on port 8443.
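The CoreDNS rewrite rule from step 4 could be sketched roughly as follows. This is an illustrative ConfigMap snippet, not the exact manifests from this PR; the zone layout and plugin ordering are assumptions based on a typical kube-system CoreDNS configuration:

```yaml
# Hypothetical CoreDNS ConfigMap sketch: rewrite the externally visible
# Keycloak hostname to the in-cluster ingress-nginx Service name, so pods
# resolve the same issuer URL that external clients use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        # Rewrite the external name to the in-cluster service name.
        rewrite name keycloak.cnoe.localtest.me ingress-nginx-controller.ingress-nginx.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```

With this in place, an in-cluster client requesting https://keycloak.cnoe.localtest.me:8443 lands on the ingress controller, which is why the controller must also listen on 8443 (step 6).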

Signed-off-by: Manabu McCloskey <[email protected]>
Contributor

@cmoulliard left a comment


We should use the standard ports, 443 and 80, even internally, just like the ingress routes available externally.

Why should we then hack the ingress service to use 8443?

- https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

patches:
- path: deployment-ingress-nginx.yaml
Contributor


Do we need this line, given that you don't patch the deployment using kustomize?

Collaborator Author


This is done to ensure the manifests match what we currently use. If I don't include these, the diff would look like this:

@@ -413,10 +413,6 @@ spec:
       app.kubernetes.io/component: controller
       app.kubernetes.io/instance: ingress-nginx
       app.kubernetes.io/name: ingress-nginx
-  strategy:
-    rollingUpdate:
-      maxUnavailable: 1
-    type: RollingUpdate
   template:
     metadata:
       labels:
@@ -429,6 +425,7 @@ spec:
       containers:
       - args:
         - /nginx-ingress-controller
+        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
         - --election-id=ingress-nginx-leader
         - --controller-class=k8s.io/ingress-nginx
         - --ingress-class=nginx
@@ -436,9 +433,6 @@ spec:
         - --validating-webhook=:8443
         - --validating-webhook-certificate=/usr/local/certificates/cert
         - --validating-webhook-key=/usr/local/certificates/key
-        - --watch-ingress-without-class=true
-        - --publish-status-address=localhost
-        - --enable-ssl-passthrough
         env:
         - name: POD_NAME
           valueFrom:
@@ -470,11 +464,9 @@ spec:
         name: controller
         ports:
         - containerPort: 80
-          hostPort: 80
           name: http
           protocol: TCP
         - containerPort: 443
-          hostPort: 443
           name: https
           protocol: TCP
         - containerPort: 8443
@@ -510,7 +502,7 @@ spec:
       nodeSelector:
         kubernetes.io/os: linux
       serviceAccountName: ingress-nginx
-      terminationGracePeriodSeconds: 0
+      terminationGracePeriodSeconds: 300

@nimakaviani
Contributor

hmmm, interesting problem.

I am fine with adding the additional port 8443, assuming it will only be used to resolve the in-cluster OIDC issuer.

if so, maybe we bake the script into the Makefile to automatically populate the changes to the manifests prior to the actual go build? It also helps with noticing a diff in the embedded file when building the binary.

@nabuskey
Collaborator Author

We should use the standard ports, 443 and 80, even internally, just like the ingress routes available externally.
Why should we then hack the ingress service to use 8443?

The problem with using 443 and 80 is that ports under 1024 are historically considered privileged and require additional permissions. Some organizations block running applications on these ports. I agree that it would be really nice if we could just use 443 and not have to do this kind of thing at all.
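Exposing the extra port could look something like the sketch below. The Service name and port naming here are assumptions based on the standard ingress-nginx cloud manifests, not the exact output of the script in this PR:

```yaml
# Hypothetical sketch: add an 8443 https port to the ingress-nginx
# controller Service so in-cluster clients can use the same port number
# that appears in the external URL, without binding a privileged port.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: https-8443
    port: 8443
    protocol: TCP
    targetPort: https   # forwards to the controller's existing https container port
```

Because the Service listens on 8443 and forwards to the existing https target port, nothing inside the container needs to bind below 1024.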

maybe we bake the script into the Makefile to automatically populate the changes to the manifests prior to doing the actual go build? It helps with noticing a diff in the embedded file too when building the binary.

We can add an additional target to make build.

@greghaynes
Contributor

I would also be fine with this being configurable, FWIW - I do think we need to support non-443 ports regardless. In addition to these ports being privileged, there's a common case of another process already listening on them...

@cmoulliard
Contributor

The problem with using 443 and 80 is that ports under 1024 are historically considered privileged and require additional permissions. Some organizations block running applications on these ports. I agree that it would be really nice if we could just use 443 and not have to do this kind of thing at all.

Don't forget that the target audience of idpbuilder at the moment is developers working on their laptops with Docker installed; they will hate having to append the suffix :8443 or :8080 to access the different ingress URLs instead of clicking the hyperlink directly!

@nabuskey
Collaborator Author

Don't forget that the target audience of idpbuilder at the moment is developers working on their laptops with Docker installed; they will hate having to append the suffix :8443 or :8080 to access the different ingress URLs instead of clicking the hyperlink directly!

I agree. I think this should be configurable like Greg mentioned. I'll open an issue to track.

@nabuskey nabuskey merged commit 5194086 into cnoe-io:main Jan 29, 2024
2 checks passed