Add istio dependency #7

Merged · 2 commits · Feb 2, 2022
Conversation

@mikenairn (Member) commented on Jan 28, 2022

Update namespace to kuadrant-system

Changes the default namespace from kuadrant-operator-system to kuadrant-system. The kuadrant controller is currently hard-coded to use kuadrant-system when creating resources, so using it here ensures all kuadrant resources end up in the same namespace. It will also make the docs across the operator/controller/kuadrantcl repos more consistent.
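In a standard kubebuilder-style operator repo, a default-namespace change like this is typically a one-line kustomize edit; a sketch only, assuming the conventional `config/default/kustomization.yaml` layout (the actual change is in this PR's diff):

```shell
# Sketch: assumes the standard kubebuilder config layout; paths may differ
# from this repo's actual structure.
cd config/default
kustomize edit set namespace kuadrant-system
```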

Add istio make commands

Add istio makefile with targets to help install/uninstall istio using istioctl. The default is to install it in its own namespace
`istio-system`, since this is more likely how it will be deployed in a real-world scenario. The install also uses the `default` profile, which installs an ingress gateway (`istio-ingressgateway`) into the istio namespace. Any example port-forward commands need to point to this ingress service:

```
kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80
```
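The new make targets are thin wrappers around istioctl; a rough sketch of the underlying commands, where the exact flags are assumptions based on the description above rather than copied from the Makefile:

```shell
# Rough equivalents of the new make targets (flags are assumptions).
istioctl install --set profile=default -y    # make istio-install
istioctl x uninstall --purge -y              # make istio-uninstall
```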

A temporary patch for the istio install, along with make targets that configure a hard-coded kuadrant/authorino setup for dev/test purposes, is also added. These are triggered using the separate make targets `istio-install-with-patch` and `post-deploy-hacks`, and will be removed once the operator itself has taken over the responsibility of creating/configuring these resources.

Verification

Deploy without OLM:

```
make kind-create-kuadrant-cluster
```

or

Deploy with OLM:

```
make kind-create-cluster install-olm istio-install deploy-olm istio-install-with-patch
```

Check deployment:
Note: If you deployed via OLM there will be some additional deployments/pods in the kuadrant-system namespace not listed here.

```
$ kubectl get deployments -n kuadrant-system
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
authorino                               1/1     1            1           108s
authorino-operator-controller-manager   1/1     1            1           2m10s
kuadrant-controller-manager             1/1     1            1           2m10s
kuadrant-operator-controller-manager    1/1     1            1           2m10s
limitador-operator-controller-manager   1/1     1            1           2m10s
$ kubectl get pods -n kuadrant-system
NAME                                                     READY   STATUS    RESTARTS   AGE
authorino-5bfb56f59c-5xrrz                               1/1     Running   0          2m39s
authorino-operator-controller-manager-748fdd4494-cx9hf   2/2     Running   0          3m1s
kuadrant-controller-manager-5dbff54bdd-8w6p5             2/2     Running   0          3m1s
kuadrant-operator-controller-manager-9d9d56bcd-zlmw9     2/2     Running   0          3m1s
limitador-operator-controller-manager-584fd799fd-x84db   2/2     Running   0          3m1s
$ kubectl get deployments -n istio-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
istio-ingressgateway   1/1     1            1           3m55s
istiod                 1/1     1            1           4m41s
$ kubectl get gateways -n kuadrant-system
NAME               AGE
kuadrant-gateway   4m4s
```
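Instead of eyeballing the `kubectl get` output, readiness can also be checked declaratively with `kubectl wait`; a sketch, where the timeout value is an arbitrary assumption:

```shell
# Block until every deployment in both namespaces reports Available
# (the 300s timeout is an arbitrary choice, not taken from the PR).
kubectl wait --for=condition=Available deployment --all -n kuadrant-system --timeout=300s
kubectl wait --for=condition=Available deployment --all -n istio-system --timeout=300s
```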

mikenairn force-pushed the add_istio_dependency branch 2 times, most recently from e9ad6f7 to 63d1722, on January 28, 2022.
mikenairn requested review from maleck13, eguzki and a team on January 28, 2022.
```
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
```
A Contributor commented:
I would remove the rbac proxy from the deployment. Maybe in another PR.

We removed it from the 3scale operator 3scale/3scale-operator#692

3scale Ops team has also removed the rbac proxy from their operator 3scale-ops/prometheus-exporter-operator#26

Check out the reasons in those PRs.

mikenairn (Member, Author) replied:

Created a follow-up JIRA for this: https://issues.redhat.com/browse/KUADRANT-32

mikenairn force-pushed the add_istio_dependency branch from 63d1722 to ed446ea on February 2, 2022.
mikenairn merged commit 1c1b1a6 into Kuadrant:main on February 2, 2022, and deleted the add_istio_dependency branch.