Gh 639 policy controller #293
Conversation
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
## main #293 +/- ##
==========================================
- Coverage 65.21% 64.00% -1.21%
==========================================
Files 35 35
Lines 3806 3806
==========================================
- Hits 2482 2436 -46
- Misses 1137 1169 +32
- Partials 187 201 +14
Flags with carried forward coverage won't be shown.
Force-pushed from 68ace0c to 871ecbd
Hmm, code coverage went down, but this only changed YAMLs.
It works as expected, creating the DNS records and the TLS connection. I verified this by setting up AWS as my DNS provider, creating a ManagedZone for my custom host, and then applying the DNSPolicy. I set everything up in the default namespace, leaving kuadrant-system for the operator/controllers. You'll need to rebase and fix the conflicts in the bundle.
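For reference, a minimal way to double-check that layout (the resource names assume the kuadrant.io CRDs installed by this PR; the default/kuadrant-system namespace split is the one described above, adjust to your own setup):
# policies and zone live in default; the operator and controllers run in kuadrant-system
kubectl -n default get managedzones,dnspolicies,tlspolicies
kubectl -n kuadrant-system get pods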
Force-pushed from 969a0e3 to c80a41e
@didierofrivia I have updated with a comment and rebased. @alexsnaps, thoughts on merging this now? We don't have any docs for DNS and TLS policy in the single-cluster context yet, so this would put the policy controller in place as part of the release for gwapi v1, but it would have no supporting docs right now. A "stealth" deploy, if you will. Docs will come soon and will definitely be part of our next "unified kuadrant" release.
Force-pushed from c80a41e to 96350e4
update bundle
Force-pushed from 96350e4 to 944a36c
All the steps I had to follow in my environment to make this work end to end, up to getting a 200 response from the final curl command:
pwd
# ~/go/src/github.com/kuadrant/kuadrant-operator
# pre-create the docker network kind will attach to, so MetalLB can be given a known subnet
export KIND_EXPERIMENTAL_DOCKER_NETWORK=kuadrant-local
docker network create -d bridge --subnet 172.31.0.0/16 $KIND_EXPERIMENTAL_DOCKER_NETWORK --gateway 172.31.0.1 \
  -o "com.docker.network.bridge.enable_ip_masquerade"="true" \
  -o "com.docker.network.driver.mtu"="1500"
make local-cluster-setup
make install-olm
make generate manifests bundle
make deploy-catalog CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:gh-639-policy-controller
kubectl get crd/dnspolicies.kuadrant.io
# NAME CREATED AT
# dnspolicies.kuadrant.io 2023-11-16T12:54:04Z
kubectl get crd/tlspolicies.kuadrant.io
# NAME CREATED AT
# tlspolicies.kuadrant.io 2023-11-16T12:54:04Z
make install-metallb
# give MetalLB a LoadBalancer address pool inside the kind docker network created above
kubectl -n metallb-system apply -f -<<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kuadrant-local
spec:
  addresses:
  - 172.31.200.0/24
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
EOF
export ROOT_DOMAIN=<ROOT_DOMAIN>
kubectl create namespace ingress
kubectl -n ingress apply -f -<<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: prod-web
spec:
  gatewayClassName: istio
  listeners:
  - allowedRoutes:
      namespaces:
        from: All
    name: specific
    hostname: "*.$ROOT_DOMAIN"
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: gui-hcpapps-tls
EOF
kubectl -n ingress apply -f -<<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: prod-ca
spec:
  selfSigned: {}
EOF
kubectl -n ingress apply -f -<<EOF
apiVersion: kuadrant.io/v1alpha1
kind: TLSPolicy
metadata:
  name: prod-web
spec:
  targetRef:
    name: prod-web
    group: gateway.networking.k8s.io
    kind: Gateway
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: prod-ca
EOF
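An optional sanity check here, assuming the TLSPolicy drives cert-manager to issue the listener certificate into the secret named by the gateway's certificateRefs (gui-hcpapps-tls above):
kubectl -n ingress get tlspolicy prod-web
kubectl -n ingress get certificates
kubectl -n ingress get secret gui-hcpapps-tls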
export AWS_ACCESS_KEY_ID=… \
       AWS_REGION=… \
       AWS_SECRET_ACCESS_KEY=… \
       AWS_DNS_PUBLIC_ZONE_ID=…
kubectl -n ingress create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  --from-literal=AWS_REGION=$AWS_REGION \
  --from-literal=AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
kubectl -n ingress apply -f - <<EOF
apiVersion: kuadrant.io/v1alpha1
kind: ManagedZone
metadata:
  name: prod-cluster
spec:
  id: $AWS_DNS_PUBLIC_ZONE_ID
  domainName: $ROOT_DOMAIN
  description: Kuadrant single cluster
  dnsProviderSecretRef:
    name: aws-credentials
    namespace: ingress
EOF
kubectl -n ingress apply -f -<<EOF
apiVersion: kuadrant.io/v1alpha1
kind: DNSPolicy
metadata:
  name: prod-web
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-web
  routingStrategy: simple
EOF
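Another optional check, assuming the policy controller reconciles the DNSPolicy into DNSRecord resources in the kuadrant.io group once the gateway has an address (the dnsrecords resource name is an assumption of this sketch):
kubectl -n ingress get dnspolicy prod-web
kubectl -n ingress get dnsrecords.kuadrant.io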
kubectl apply -f -<<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
spec:
  parentRefs:
  - kind: Gateway
    name: prod-web
    namespace: ingress
  hostnames:
  - echo.$ROOT_DOMAIN
  rules:
  - backendRefs:
    - name: echo
      port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  ports:
  - name: http-port
    port: 8080
    targetPort: http-port
    protocol: TCP
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: docker.io/jmalloc/echo-server
        ports:
        - name: http-port
          containerPort: 8080
          protocol: TCP
EOF
dig echo.$ROOT_DOMAIN
# […]
# ;; ANSWER SECTION:
# echo.<ROOT_DOMAIN>. 60 IN A 172.31.200.0
curl -k https://echo.$ROOT_DOMAIN -i
# HTTP/2 200
Nice, thanks for the detailed steps @guicassolato. Could be useful to add these to a script or doc, I guess, for setting up Kuadrant single cluster. I wonder, do we want to add some of this to
All the above, please.
Draft PR adding the new policy controller to the deployment and CSV for the kuadrant operator.
This PR adds the policy controller via kustomize; it also adds MetalLB as an option.
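As an illustration only, pulling an external controller's manifests into the operator deployment via kustomize can look roughly like the snippet below; the file path, repository path, and ref are hypothetical placeholders, not the actual layout used by this PR.
# config/policy-controller/kustomization.yaml — hypothetical example, not the real file from this PR
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# remote kustomize bases can be pulled straight from a git repo; path and ref are placeholders
- https://github.com/Kuadrant/multicluster-gateway-controller/config/policy-controller?ref=main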
Verification
make local-cluster-setup
Build a new bundle and catalog and install it into the cluster.
Expect to see a new kuadrant-operator-policy-controller deployment in the kuadrant-system namespace (see the sketch after this list).
Validate that the DNSPolicy and TLSPolicy CRDs are present in the cluster.
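A quick way to check both of these after installing the catalog (the make targets and catalog image mirror the ones from the verification comment above; the grep pattern is only illustrative):
make generate manifests bundle
make deploy-catalog CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:gh-639-policy-controller
kubectl -n kuadrant-system get deployments
kubectl get crds | grep -E 'dnspolicies|tlspolicies'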
Extra Validation
Before doing this you will need AWS credentials that give access to Route 53 and a domain (talk to Craig, as he can set you up with this).
Deploy MetalLB to a local cluster:
make install-metallb
kubectl create -f config/metallb/metal-lb.yaml
Create a new Istio gateway.
Setup TLS
Create an issuer.
Create a TLSPolicy.
Observe that the gateway is now configured and has an address and TLS secrets.
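For example, assuming the gateway was created as prod-web in the ingress namespace, as in the walkthrough above:
kubectl -n ingress get gateway prod-web
kubectl -n ingress get secrets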
Setup DNS
Set up a DNS provider:
https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/dns-provider/
Set up a ManagedZone:
https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/managed-zone/
Create a DNSPolicy.
Set up an HTTPRoute and backend.
You should be able to dig your domain, or curl it if you can resolve the IP.
Example:
dig specific.cb.hcpapps.net
Clean up
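A possible cleanup, assuming the kind cluster created by make local-cluster-setup and the docker network are both named kuadrant-local (the cluster name is an assumption; use whatever kind get clusters reports):
kind get clusters
kind delete cluster --name kuadrant-local
docker network rm kuadrant-local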