Support k8gb behind a reverse proxy #1710

Open · wants to merge 2 commits into master from reverse/proxy
Conversation

@abaguas (Collaborator) commented Aug 25, 2024

Problem

K8GB reads the IP addresses it exposes from `Ingress.Status.LoadBalancer.Ingress` or from `Service.Status.LoadBalancer.Ingress`, for ingress traffic configured with Kubernetes Ingress or Istio VirtualService resources, respectively. These are the addresses exposed by the Kubernetes cluster itself.
However, in some setups clients do not route their traffic to these addresses because the cluster sits behind a reverse proxy.

Solution

To support this setup, K8GB should expose DNS records with the IP address of the reverse proxy. Since that address is unknown to the cluster, the K8GB administrator must provide it via configuration. This PR adds to K8GB the capability to read IP addresses from the annotation `k8gb.io/exposed-ip-addresses` on the Gslb resource.

Example

```
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  annotations:
    k8gb.io/exposed-ip-addresses: "185.199.110.153"
```

Validation

A ValidatingAdmissionPolicy was added to the helm chart to validate that the value of the annotation is a valid IPv4 address. If that is not the case, the following error is shown:

```
The gslbs "failover-playground-istio" is invalid: : ValidatingAdmissionPolicy 'validate-strict-exposed-ip-annotation' with binding 'enforce-strict-exposed-ip-annotation' denied request: The annotation 'k8gb.io/exposed-ip-addresses' must contain a valid IPv4 address
```
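For context, the policy name `validate-strict-exposed-ip-annotation` and binding `enforce-strict-exposed-ip-annotation` come from the error above; the manifest below is only a minimal sketch of what such a policy could look like, assuming the annotation holds a comma-separated list of IPv4 addresses and using an illustrative CEL expression rather than the chart's exact one.

```
# Illustrative sketch only -- the real policy lives in the k8gb helm chart and may differ.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: validate-strict-exposed-ip-annotation
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["k8gb.absa.oss"]
      apiVersions: ["v1beta1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["gslbs"]
  validations:
  - expression: >-
      !has(object.metadata.annotations) ||
      !('k8gb.io/exposed-ip-addresses' in object.metadata.annotations) ||
      object.metadata.annotations['k8gb.io/exposed-ip-addresses'].split(',').all(ip,
      ip.matches('^([0-9]{1,3}\\.){3}[0-9]{1,3}$'))
    message: "The annotation 'k8gb.io/exposed-ip-addresses' must contain a valid IPv4 address"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: enforce-strict-exposed-ip-annotation
spec:
  policyName: validate-strict-exposed-ip-annotation
  validationActions: ["Deny"]
```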

If the ValidatingAdmissionPolicy is disabled, the controller also rejects the resource, with the following error:

```
2024-10-31T14:02:36Z ERR github.com/k8gb-io/k8gb/controllers/logging/logr.go:72 > events: Reconciler error {"Gslb":"aba/failover-playground-istio","controller":"gslb","controllerGroup":"k8gb.absa.oss","controllerKind":"Gslb","name":"failover-playground-istio","namespace":"aba","reconcileID":"8523077b-b9bf-4206-922b-036eb8e21a7b"} error="getting load balancer exposed IPs (invalid IP address: 127.0.0.1002)"
```

Note: with the current testing framework it is rather cumbersome to add a test that verifies the errors above. But if we agree to proceed with chainsaw it will be quite easy: #1758
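If we do adopt chainsaw, a rejection test could look roughly like the sketch below. The test name, the fixture file `gslb-invalid-ip.yaml`, and the layout are hypothetical (not part of this PR); the `expect`/`check` error pattern follows chainsaw's documented way of asserting that an apply fails.

```
# Hypothetical chainsaw test sketch: applying a Gslb whose
# k8gb.io/exposed-ip-addresses annotation is invalid should be rejected.
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: reject-invalid-exposed-ip-annotation
spec:
  steps:
  - try:
    - apply:
        # gslb-invalid-ip.yaml would be a fixture carrying an invalid value,
        # e.g. k8gb.io/exposed-ip-addresses: 192.169.0.test
        file: gslb-invalid-ip.yaml
        expect:
        - check:
            # the apply is expected to fail (denied by the admission policy)
            ($error != null): true
```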

Fixes #1275

@abaguas abaguas marked this pull request as draft August 25, 2024 19:26
@abaguas abaguas marked this pull request as ready for review September 24, 2024 18:26
@ytsarev (Member) left a comment


I tested with the ingress local setup and set the annotation on both test-gslb1 and test-gslb2 local clusters.

```
k -n test-gslb get ing failover-ingress -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
  labels:
    app: test-gslb-failover
  name: failover-ingress
  namespace: test-gslb
spec:
  ingressClassName: nginx
  rules:
  - host: failover.cloud.example.com
    http:
      paths:
      - backend:
          service:
            name: frontend-podinfo
            port:
              name: http
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 172.20.0.6
    - ip: 172.20.0.7
```

```
k -n test-gslb get gslb failover-ingress -o yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8gb.absa.oss/v1beta1","kind":"Gslb","metadata":{"annotations":{},"name":"failover-ingress","namespace":"test-gslb"},"spec":{"resourceRef":{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","matchLabels":{"app":"test-gslb-failover"}},"strategy":{"primaryGeoTag":"eu","type":"failover"}}}
  creationTimestamp: "2024-10-05T10:59:11Z"
  finalizers:
  - k8gb.absa.oss/finalizer
  generation: 2
  name: failover-ingress
  namespace: test-gslb
  resourceVersion: "53488"
  uid: b9237df5-8742-400b-9102-3d40e401ca4a
spec:
  ingress: {}
  resourceRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    matchLabels:
      app: test-gslb-failover
  strategy:
    dnsTtlSeconds: 30
    primaryGeoTag: eu
    splitBrainThresholdSeconds: 300
    type: failover
status:
  geoTag: eu
  healthyRecords:
    failover.cloud.example.com:
    - 185.199.110.153
    - 185.199.109.153
  hosts: failover.cloud.example.com
  loadBalancer:
    exposedIps:
    - 185.199.110.153
    - 185.199.109.153
```

```
dig @localhost -p 5053 failover.cloud.example.com +short +tcp
185.199.110.153
185.199.109.153
```

The mechanics work as expected 👍

However, I have some design questions.

In the Ingress case, we set the `k8gb.io/exposed-ip-addresses` annotation on the referenced Ingress resource, i.e. the one selected by `resourceRef` in the main Gslb spec, e.g.

```
  resourceRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    matchLabels:
      app: ingress-referenced
```

In the Istio scenario, on the other hand, we set the `k8gb.io/exposed-ip-addresses` annotation on the Service (v1) resource, which is not obvious, because what is referenced in the Gslb spec is the VirtualService resource, e.g.

```
spec:
  resourceRef:
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    matchLabels:
      app: istio
```

So it looks like we have some form of implementation/abstraction leak here.

I have a slightly modified design proposal: what if we set the `k8gb.io/exposed-ip-addresses` annotation on the main Gslb resource itself? This way we simply override whatever the referenced network resource exposes, independent of the specific underlying resource:

```
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb-failover
  namespace: test-gslb
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
spec:
  resourceRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    matchLabels: # ingresses.networking.k8s.io resource selector
      app: test-gslb-failover
  strategy:
    type: failover
    primaryGeoTag:
```

@abaguas What do you think?

@abaguas (Collaborator, Author) commented Oct 8, 2024

> I have a slightly modified design proposal: what if we set the `k8gb.io/exposed-ip-addresses` annotation on the main Gslb resource itself, overriding whatever the referenced network resource exposes? [...] @abaguas What do you think?

I added it to the Ingress and Service resources because those are the ones carrying the IP address information, so we would override it there.

I agree with your point: in many cases the user won't know the technical details of the ingress implementation or of k8gb well enough to know where to place the annotation. Setting it on the Gslb also avoids annotating a resource that we do not control, which may save us from unexpected corner cases. Let's proceed with your proposal.
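For illustration, with the agreed design the same annotation should work unchanged for an Istio-backed Gslb; the manifest below is a hypothetical example (names reused from the playground above), not a fixture from this PR.

```
# Hypothetical example: the annotation lives on the Gslb itself, so the mechanism
# is the same whether resourceRef points at an Ingress or an Istio VirtualService.
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: failover-playground-istio
  namespace: test-gslb
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
spec:
  resourceRef:
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    matchLabels:
      app: istio
  strategy:
    type: failover
    primaryGeoTag: eu
```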

@abaguas abaguas force-pushed the reverse/proxy branch 2 times, most recently from af3314d to 64327c1 Compare October 13, 2024 19:56
@abaguas abaguas requested a review from ytsarev October 17, 2024 19:37
@ytsarev (Member) left a comment


I tested the happy-path scenario of setting `k8gb.io/exposed-ip-addresses` to valid IP values; everything worked perfectly 👍

Concern:

I can set the annotation to an invalid value such as `k8gb.io/exposed-ip-addresses: 192.169.0.test`.

In this case the invalid value is propagated as-is to both the Gslb

```
k -n test-gslb get gslb failover-ingress -o yaml
...
status:
  geoTag: eu
  healthyRecords:
    failover.cloud.example.com:
    - 192.169.0.test
  hosts: failover.cloud.example.com
  loadBalancer:
    exposedIps:
    - 192.169.0.test
```

and to the underlying DNSEndpoint:

```
k -n test-gslb get dnsendpoints failover-ingress -o yaml
...
spec:
  endpoints:
  - dnsName: localtargets-failover.cloud.example.com
    recordTTL: 30
    recordType: A
    targets:
    - 192.169.0.test
  - dnsName: failover.cloud.example.com
    labels:
      strategy: failover
    recordTTL: 30
    recordType: A
    targets:
    - 192.169.0.test
```

Apart from undesired misconfiguration, this could be used as a security attack vector.
We are missing validation here. It's probably better to catch it as early as possible, at annotation evaluation time, for example by emitting a Failed event.

@abaguas abaguas force-pushed the reverse/proxy branch 4 times, most recently from 78af2f6 to 95d5f02 Compare October 31, 2024 14:44
@abaguas abaguas requested a review from ytsarev October 31, 2024 15:19
@ytsarev (Member) commented Nov 7, 2024

@abaguas could you please resolve the conflict? It looks like it happened after merging another PR :)

Commit message:

Problem
K8GB reads IP addresses from `Ingress.Status.LoadBalancer.Ingress` or from `Service.Status.LoadBalancer.Ingress` for ingress traffic configured with Kubernetes Ingress and Istio VirtualService, respectively.
The IP addresses exposed by these resources are the ones exposed by the Kubernetes cluster. However, in some setups the clients do not route their traffic to these IP addresses because the cluster is behind a reverse proxy.

Solution
To support this setup, K8GB should expose DNS records with the IP address of the reverse proxy. Since the address is unknown to the cluster, the K8GB administrator must provide it via configuration. This PR adds to K8GB the capability to read IP addresses from an annotation `k8gb.io/external-ips` on the GSLB resource.

Example
```
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  labels:
    app: ingress
  annotations:
    k8gb.io/external-ips: "185.199.110.153"
```

Fixes k8gb-io#1275

Signed-off-by: Andre Baptista Aguas <[email protected]>
@abaguas (Collaborator, Author) commented Nov 7, 2024

> @abaguas could you please resolve the conflict? It looks like it happened after merging another PR :)

@ytsarev done

@ytsarev (Member) left a comment


Tested e2e; both the functionality and the validation work. The concern is the dependency on Kubernetes 1.30 by default for the validation, so maybe we should keep it disabled for the time being.

```
@@ -56,6 +56,9 @@ k8gb:
   # -- enable ServiceMonitor
   serviceMonitor:
     enabled: false
+  # -- enable validating admission policies
+  validatingAdmissionPolicy:
+    enabled: true
```
A Member commented on the diff above:
Following https://open-policy-agent.github.io/gatekeeper/website/docs/validating-admission-policy/#:~:text=The%20Kubernetes%20Validating%20Admission%20Policy,30%20(enabled%20by%20default):

> The Kubernetes Validating Admission Policy feature was introduced as an alpha feature in Kubernetes v1.26, beta in v1.28 (disabled by default), and GA in v1.30 (enabled by default).

It looks like we are imposing a strong dependency on the latest Kubernetes version by default. Maybe we should keep it `enabled: false` for a while.
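If we go that way, the chart default would simply flip, along the lines of this sketch of the `values.yaml` fragment shown in the diff above:

```
# Sketch of the suggested default: ship the policy opt-in (disabled) until
# Kubernetes >= 1.30 clusters are the norm.
validatingAdmissionPolicy:
  enabled: false
```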
