NOTE: the files referenced below can be found in the corresponding folders of this repository.
## Cassandra preparation steps
Cass-operator should be installed beforehand, e.g. from OperatorHub.
Create a namespace for Cassandra and make sure that it is available for ArgoCD:

```bash
oc create namespace argo-cassandra
oc config set-context --current --namespace argo-cassandra
oc label namespace argo-cassandra argocd.argoproj.io/managed-by=<namespace-with-argocd-instance>
```
Make sure that you have a storage class with the reclaim policy `Retain` chosen for Cassandra persistence.
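As an illustration only, a StorageClass with the `Retain` reclaim policy could look like the sketch below; the name and provisioner are placeholders and should be replaced with whatever your cluster provides:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cassandra-retain                      # placeholder name
provisioner: kubernetes.io/no-provisioner     # replace with your cluster's provisioner
reclaimPolicy: Retain                         # PVs survive PVC deletion
volumeBindingMode: WaitForFirstConsumer
```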
In order to store a secret with sensitive data safely in the git repository, it should be sealed with kubeseal and uploaded to the cluster as a SealedSecret resource:
Create `cassandra-secret.yaml` with the following content:

```
password=<cassandra superuser password>
username=th2
```
Encrypt the secret; after that it can be stored in the git repo:

```bash
oc create secret generic cassandra-secret --dry-run=client --from-env-file=tmp/cassandra-secret.yaml -o json > tmp/cassandra-secret.json
kubeseal --cert ~/.kube/profiles/okd.sealed.pubkey.pem <tmp/cassandra-secret.json >cassandra-secret.json
```
Create an application in the ArgoCD GUI and fill in the form as in `argo-cassandra.yaml`.
Or run a CLI command to deploy Cassandra:

```bash
oc -n <namespace-with-argocd-instance> apply -f argo-cassandra.yaml
```
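For reference, an ArgoCD Application manifest such as `argo-cassandra.yaml` generally follows the shape sketched below; the repository URL, path, and target revision are placeholders, not the actual values from this repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cassandra
  namespace: <namespace-with-argocd-instance>
spec:
  project: default
  source:
    repoURL: https://example.com/your/gitops-repo.git   # placeholder
    targetRevision: main                                 # placeholder
    path: cassandra                                      # placeholder path with the manifests/chart
  destination:
    server: https://kubernetes.default.svc
    namespace: argo-cassandra
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```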
Add the `anyuid` SCC to the cassandra ServiceAccount:

```bash
oc adm policy add-scc-to-user anyuid -n argo-cassandra -z cassandra
```
## Monitoring preparation steps
In order to get TH2 custom metrics, you need to enable monitoring for user-defined projects.
Create the `cluster-monitoring-config` ConfigMap object in the `openshift-monitoring` namespace (or edit it if it already exists). Add `enableUserWorkload: true` under `data/config.yaml`, as in the sketch below, and apply the file:

```bash
oc apply -f cluster-monitoring-config.yaml
```

Monitoring for user-defined projects is then enabled automatically.
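A minimal `cluster-monitoring-config.yaml` enabling user-workload monitoring could look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```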
You can optionally create and configure the `user-workload-monitoring-config` ConfigMap object (`user-workload-monitoring-config.yaml`) in the `openshift-user-workload-monitoring` project. You can add configuration options to this ConfigMap for the components that monitor user-defined projects, e.g. to distribute the workload onto worker nodes and to configure persistence for the user-workload Prometheus; a sketch is shown below.
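A sketch of such a `user-workload-monitoring-config.yaml`, assuming a worker node selector and a placeholder storage class (adjust both to your cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      # schedule the user-workload Prometheus on worker nodes
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      # persist metrics data
      volumeClaimTemplate:
        spec:
          storageClassName: <storage-class>
          resources:
            requests:
              storage: 10Gi
```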
Check that the `prometheus-operator`, `prometheus-user-workload` and `thanos-ruler-user-workload` pods are running in the `openshift-user-workload-monitoring` project:

```bash
oc get po -n openshift-user-workload-monitoring
```
It might take a short while for the pods to start.
TODO: replace the helm repo with a git repo for the helm chart (for restricted environments); get rid of Multiple Sources for an Application.
Create a namespace for the application and make sure that it is available for ArgoCD:

```bash
oc create ns argo-monitoring
oc config set-context --current --namespace argo-monitoring
oc label namespace argo-monitoring argocd.argoproj.io/managed-by=openshift-gitops
```
If the monitoring stack is deployed before th2-infra, Grafana dashboards and plugins can be temporarily switched off in `loki-values.yaml`.
The Grafana helm repository should be added under /settings/repos in the ArgoCD GUI.
Create two applications in the ArgoCD GUI and fill in the forms as in `argo-monitoring.yaml` and `grafana-access.yaml`.
Or run a CLI command to deploy monitoring:

```bash
oc -n openshift-gitops apply -f argo-monitoring.yaml
```
The `privileged` security context constraint (SCC) should be added to the promtail ServiceAccount, and the `anyuid` SCC to the Loki and Grafana ServiceAccounts:

```bash
oc adm policy add-scc-to-user privileged -n argo-monitoring -z monitoring-promtail
oc adm policy add-scc-to-user anyuid -z monitoring-grafana -n argo-monitoring
```
Originally Grafana has no access to the OpenShift Prometheus; to provide it, a proper Authorization HTTP header should be passed in the Prometheus data source configuration.
The resources that enable this are a ServiceAccount in the argo-monitoring namespace with a Secret of `kubernetes.io/service-account-token` type, plus a binding of the `cluster-monitoring-view` ClusterRole to that ServiceAccount; they should already be created with ArgoCD (see the sketch below).
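A rough sketch of those resources, with names chosen for illustration only (the actual names come from `grafana-access.yaml`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana-prometheus-access            # illustrative name
  namespace: argo-monitoring
---
apiVersion: v1
kind: Secret
metadata:
  name: grafana-prometheus-access-token      # illustrative name
  namespace: argo-monitoring
  annotations:
    kubernetes.io/service-account.name: grafana-prometheus-access
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grafana-cluster-monitoring-view      # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view
subjects:
  - kind: ServiceAccount
    name: grafana-prometheus-access
    namespace: argo-monitoring
```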
You can check the secret the following way:

```bash
oc get secrets -n argo-monitoring | grep grafana-prometheus-access-token
```
and obtain the Authorization HTTP header with the following command:

```bash
echo -n "Bearer "; oc -n argo-monitoring get secrets grafana-prometheus-access-token-hcr6q -o jsonpath="{..token}" | base64 -d ; printf "\n"
```
The output should be added to `grafana.datasources.datasources.yaml.datasources.secureJsonData.httpHeaderValue1` in the monitoring values; a sketch is shown below.
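A sketch of the corresponding Grafana values fragment; the data source name and the in-cluster Thanos querier URL are assumptions and should be adjusted to your setup:

```yaml
grafana:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Prometheus                   # placeholder name
          type: prometheus
          access: proxy
          url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091   # assumed endpoint
          jsonData:
            httpHeaderName1: Authorization
            tlsSkipVerify: true
          secureJsonData:
            httpHeaderValue1: "Bearer <token from the command above>"
```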
## Sealed Secrets preparation steps
Detailed documentation for Sealed Secrets can be found here.
Create the service namespace and make sure that it is available for ArgoCD:

```bash
oc create namespace argo-service
oc label namespace argo-service argocd.argoproj.io/managed-by=<namespace-with-argocd-instance>
```
Create an application in the ArgoCD GUI and fill in the form as in `argo-secrets.yaml`.
Or run a CLI command to deploy Sealed Secrets:

```bash
oc -n <namespace-with-argocd-instance> create -f argo-secrets.yaml
```
Sealing key renewal is turned off in `values.yaml`.
The kubeseal tool provides a way to encrypt secrets; it should be installed on the operator's host:

```bash
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/<release-tag>/kubeseal-<version>-linux-amd64.tar.gz
tar -xvzf kubeseal-<version>-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```
The certificate can be fetched from the controller log and stored on the operator's host:

```bash
oc -n argo-service logs sealed-secrets-57cc6d7d5-qltk9
```
After that, kubeseal can be checked, e.g. the following way:

```bash
kubeseal --cert ~/.kube/profiles/okd.sealed.pubkey.pem
```
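For a more complete check, one could seal a throwaway secret end to end; the secret name and key below are made up for illustration:

```bash
# create a dummy secret manifest without touching the cluster
oc create secret generic kubeseal-check --dry-run=client --from-literal=foo=bar -o json > /tmp/kubeseal-check.json
# seal it with the controller's public certificate
kubeseal --cert ~/.kube/profiles/okd.sealed.pubkey.pem < /tmp/kubeseal-check.json > /tmp/kubeseal-check.sealed.json
# inspect the result: the value of "foo" must come out encrypted
cat /tmp/kubeseal-check.sealed.json
```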
## th2 installation
The argo-service namespace has already been created during the Sealed Secrets operator deployment; otherwise it should be created the following way:

```bash
oc create namespace argo-service
oc config set-context --current --namespace argo-service
oc label namespace argo-service argocd.argoproj.io/managed-by=<namespace-with-argocd-instance>
```
To be able to run th2 workloads in multiple namespaces with the same domain name, the route admission policy needs to be configured the following way:

```bash
oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge
```
In order to store secrets with sensitive data safely in the git repository, they should be sealed with kubeseal and uploaded to the cluster as SealedSecret resources:
Create `rabbitmq-secret.yaml` with the following content:

```
rabbitmq-password=<my_password>
rabbitmq-erlang-cookie=<random_string>
```
```bash
oc create secret generic rabbitmq --dry-run=client --from-env-file=tmp/rabbitmq-secret.yaml -o json > tmp/rabbitmq-secret.json
oc create secret generic cassandra --dry-run=client --from-env-file=tmp/cassandra-secret.yaml -o json > tmp/cassandra-secret.json
oc create secret docker-registry nexus-proxy --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-password> --dry-run=client -o json > tmp/nexus-proxy.secret.json
oc create secret docker-registry nexus-private --docker-server=<your-one-more-registry-server> --docker-username=<your-name> --docker-password=<your-password> --dry-run=client -o json > tmp/nexus-private.secret.json
```
The base64-encoded private key should be inserted into the inframgr-secret.json secret manually:
{ "kind": "SealedSecret", "apiVersion": "bitnami.com/v1alpha1", "metadata": { "name": "inframgr-secret", "namespace": "argo-service", "creationTimestamp": null }, "spec": { "template": { "metadata": { "name": "inframgr-secret", "namespace": "argo-service", "creationTimestamp": null } }, "encryptedData": { "id_rsa": "<HERE> <=== !!! " } } }
Encrypt the secrets; after that they can be stored in the git repo:

```bash
for SECRET in $(ls tmp/ | grep .json$); do kubeseal --cert ~/.kube/profiles/okd.sealed.pubkey.pem <tmp/$SECRET >secrets/$SECRET; done
```
We do not install `converter` here, because it is initially assumed that the OpenShift cluster doesn't have write access to the git repository.

To deploy JupyterHub in OpenShift (OKD), the `anyuid` SCC should be added to the `hub` ServiceAccount:

```bash
oc adm policy add-scc-to-user anyuid -n argo-service -z hub
```
Create an application in the ArgoCD GUI and fill in the form as in `argo-th2.yaml`.
Or run a CLI command to deploy:

```bash
oc -n <namespace-with-argocd-instance> apply -f argo-th2.yaml
```