Support k8gb behind a reverse proxy #1710
base: master
Conversation
Force-pushed from 8274656 to d5d76c0
I tested with the ingress local setup and set the annotation on both test-gslb1 and test-gslb2 local clusters.
```shell
k -n test-gslb get ing failover-ingress -o yaml
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
  labels:
    app: test-gslb-failover
  name: failover-ingress
  namespace: test-gslb
spec:
  ingressClassName: nginx
  rules:
  - host: failover.cloud.example.com
    http:
      paths:
      - backend:
          service:
            name: frontend-podinfo
            port:
              name: http
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 172.20.0.6
    - ip: 172.20.0.7
```
```shell
k -n test-gslb get gslb failover-ingress -o yaml
```
```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8gb.absa.oss/v1beta1","kind":"Gslb","metadata":{"annotations":{},"name":"failover-ingress","namespace":"test-gslb"},"spec":{"resourceRef":{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","matchLabels":{"app":"test-gslb-failover"}},"strategy":{"primaryGeoTag":"eu","type":"failover"}}}
  creationTimestamp: "2024-10-05T10:59:11Z"
  finalizers:
  - k8gb.absa.oss/finalizer
  generation: 2
  name: failover-ingress
  namespace: test-gslb
  resourceVersion: "53488"
  uid: b9237df5-8742-400b-9102-3d40e401ca4a
spec:
  ingress: {}
  resourceRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    matchLabels:
      app: test-gslb-failover
  strategy:
    dnsTtlSeconds: 30
    primaryGeoTag: eu
    splitBrainThresholdSeconds: 300
    type: failover
status:
  geoTag: eu
  healthyRecords:
    failover.cloud.example.com:
    - 185.199.110.153
    - 185.199.109.153
  hosts: failover.cloud.example.com
  loadBalancer:
    exposedIps:
    - 185.199.110.153
    - 185.199.109.153
```
```shell
dig @localhost -p 5053 failover.cloud.example.com +short +tcp
185.199.110.153
185.199.109.153
```
The mechanics work as expected 👍
However, I have some design questions.
In the Ingress case, we set the `k8gb.io/exposed-ip-addresses` annotation on the referenced Ingress resource, i.e. the one selected by `resourceRef` in the main Gslb spec, e.g.
```yaml
resourceRef:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  matchLabels:
    app: ingress-referenced
```
At the same time, in the Istio scenario we set the `k8gb.io/exposed-ip-addresses` annotation on a Service (v1) resource, which is not obvious, because what is referenced in the Gslb spec is a VirtualService resource, e.g.
```yaml
spec:
  resourceRef:
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    matchLabels:
      app: istio
```
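For illustration, a hedged sketch of where the annotation ends up in the Istio case (the Service name and namespace below are hypothetical): it lands on a Service, not on the VirtualService that the Gslb actually references.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway   # hypothetical Service; not the resource in resourceRef
  namespace: istio-system
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
```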
So it looks like we have some form of implementation/abstraction leak here.
I have a slightly modified design proposal: what if we set the `k8gb.io/exposed-ip-addresses` annotation on the main Gslb resource itself? This way we override whatever the referenced network resource exposes and stay independent of the specific underlying resource kind:
```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb-failover
  namespace: test-gslb
  annotations:
    k8gb.io/exposed-ip-addresses: 185.199.110.153,185.199.109.153
spec:
  resourceRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    matchLabels: # ingresses.networking.k8s.io resource selector
      app: test-gslb-failover
  strategy:
    type: failover
    primaryGeoTag: eu
```
@abaguas What do you think?
I added it to the Ingress and Service resources because those are the ones carrying the IP address information, so we would override it there. I agree with your point: in many cases the user won't know the technical details of the ingress or k8gb well enough to know where to place the annotation. Setting it on the Gslb also avoids annotating a resource that we do not control, which may save us from unexpected corner cases. Let's proceed with your proposal.
Force-pushed from af3314d to 64327c1
I tested the happy path scenario of setting `k8gb.io/exposed-ip-addresses` to valid IP values. Everything worked perfectly with the valid values 👍
Concern:
I can set the annotation to something like `k8gb.io/exposed-ip-addresses: 192.169.0.test`, propagating an invalid value. In this case the invalid value is propagated as is to both the Gslb
```shell
k -n test-gslb get gslb failover-ingress -o yaml
```
```yaml
...
status:
  geoTag: eu
  healthyRecords:
    failover.cloud.example.com:
    - 192.169.0.test
  hosts: failover.cloud.example.com
  loadBalancer:
    exposedIps:
    - 192.169.0.test
```
and the underlying DNSEndpoint:
```shell
k -n test-gslb get dnsendpoints failover-ingress -o yaml
```
```yaml
...
spec:
  endpoints:
  - dnsName: localtargets-failover.cloud.example.com
    recordTTL: 30
    recordType: A
    targets:
    - 192.169.0.test
  - dnsName: failover.cloud.example.com
    labels:
      strategy: failover
    recordTTL: 30
    recordType: A
    targets:
    - 192.169.0.test
```
Apart from undesired misconfiguration, this could be used as a security attack vector.
We are missing validation here. It's probably better to catch it as early as possible, at annotation evaluation, perhaps by emitting a Failed event.
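A minimal sketch of what annotation-time validation could look like in Go; the function name `parseExposedIPs` is illustrative, not the actual k8gb code:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// exposedIPAnnotation is the annotation under discussion.
const exposedIPAnnotation = "k8gb.io/exposed-ip-addresses"

// parseExposedIPs returns the annotated IPs, failing fast on the first
// entry that does not parse, so the controller can surface an event
// instead of propagating the bad value into DNSEndpoint targets.
func parseExposedIPs(annotations map[string]string) ([]string, error) {
	raw, ok := annotations[exposedIPAnnotation]
	if !ok {
		return nil, nil // annotation absent: fall back to status-derived IPs
	}
	var ips []string
	for _, entry := range strings.Split(raw, ",") {
		entry = strings.TrimSpace(entry)
		if net.ParseIP(entry) == nil {
			return nil, fmt.Errorf("%s contains invalid IP %q", exposedIPAnnotation, entry)
		}
		ips = append(ips, entry)
	}
	return ips, nil
}

func main() {
	// The invalid value from the comment above is rejected up front.
	_, err := parseExposedIPs(map[string]string{
		exposedIPAnnotation: "192.169.0.test",
	})
	fmt.Println(err)
}
```

On error the reconciler could short-circuit and record a warning event on the Gslb (e.g. via client-go's `record.EventRecorder`) rather than writing the invalid targets.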
Force-pushed from 78af2f6 to 95d5f02
@abaguas could you please resolve the conflict? It looks like it happened after merging another PR :)
Problem

K8GB reads IP addresses from `Ingress.Status.LoadBalancer.Ingress` or from `Service.Status.LoadBalancer.Ingress` for ingress configured with Kubernetes Ingress and Istio Virtual Service, respectively. The IP addresses exposed by these resources are the IP addresses exposed by the Kubernetes cluster. However, in some setups the clients do not route their traffic to these IP addresses because the cluster is behind a reverse proxy.

Solution

To support this setup, K8GB should expose DNS records with the IP address of the reverse proxy. Since the address is unknown to the cluster, the K8GB administrator must provide it via configuration. This PR adds to K8GB the capability to read IP addresses from an annotation `k8gb.io/external-ips` on the GSLB resource.

Example

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  labels:
    app: ingress
  annotations:
    k8gb.io/external-ips: "185.199.110.153"
```

Fixes k8gb-io#1275

Signed-off-by: Andre Baptista Aguas <[email protected]>
Signed-off-by: Andre Aguas <[email protected]>
Force-pushed from 95d5f02 to 960e7e9
Tested e2e; both the functionality and the validation work. The concern is the default dependency on Kubernetes 1.30 for validation, so maybe we should keep it `false` for the time being.
```diff
@@ -56,6 +56,9 @@ k8gb:
   # -- enable ServiceMonitor
   serviceMonitor:
     enabled: false
+  # -- enable validating admission policies
+  validatingAdmissionPolicy:
+    enabled: true
```
The Kubernetes Validating Admission Policy feature was introduced as an alpha feature in Kubernetes v1.26, went beta in v1.28 (disabled by default), and GA in v1.30 (enabled by default).
It looks like we are imposing a strong dependency on the latest Kubernetes version by default. Maybe we should keep it `enabled: false` for a while.
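To make the dependency concrete, here is a hedged sketch of such a policy (not necessarily the one shipped in the chart): `isIP()` and `split()` come from CEL libraries available only in recent Kubernetes versions, which is what ties the validation to newer clusters. A `ValidatingAdmissionPolicyBinding` would also be needed to enforce it.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: gslb-exposed-ip-addresses   # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["k8gb.absa.oss"]
      apiVersions: ["v1beta1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["gslbs"]
  validations:
  - expression: >-
      !has(object.metadata.annotations) ||
      !('k8gb.io/exposed-ip-addresses' in object.metadata.annotations) ||
      object.metadata.annotations['k8gb.io/exposed-ip-addresses']
        .split(',').all(ip, isIP(ip))
    message: "k8gb.io/exposed-ip-addresses must be a comma-separated list of valid IP addresses"
```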
Problem
K8GB reads the IP addresses it exposes from `Ingress.Status.LoadBalancer.Ingress` or from `Service.Status.LoadBalancer.Ingress` for ingresses configured with Kubernetes Ingress or Istio VirtualService resources, respectively. However, in some setups the clients do not route their traffic to these IP addresses because the cluster is behind a reverse proxy.
Solution
To support this setup, K8GB should expose DNS records with the IP address of the reverse proxy. Since the address is unknown to the cluster, the K8GB administrator must provide it via configuration. This PR adds to K8GB the capability to read IP addresses from an annotation `k8gb.io/exposed-ip-addresses` on the GSLB resource.
Example
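A minimal example, adapted from the commit message above but using the final annotation name:

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  labels:
    app: ingress
  annotations:
    k8gb.io/exposed-ip-addresses: "185.199.110.153"
```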
Validation
A `ValidatingAdmissionPolicy` was added to the Helm chart to validate that the value of the annotation is a valid IPv4 address. If that is not the case, the following error is shown:
If the `ValidatingAdmissionPolicy` is disabled, the controller will also reject the resource with the following error:
Note: with the current testing framework it is rather cumbersome to add a test where the errors above are verified. But if we agree to proceed with chainsaw it will be quite easy: #1758
Fixes #1275