kommander: Set Grafana home dashboard in Kommander Grafana #386

Merged
2 changes: 1 addition & 1 deletion stable/kommander/Chart.yaml
@@ -7,4 +7,4 @@ maintainers:
- name: alejandroEsc
- name: jimmidyson
name: kommander
-version: 0.3.20
+version: 0.3.21
Binary file removed stable/kommander/charts/kommander-karma-0.3.2.tgz
Binary file removed stable/kommander/charts/kommander-thanos-0.1.7.tgz
8 changes: 4 additions & 4 deletions stable/kommander/requirements.lock
@@ -10,12 +10,12 @@ dependencies:
version: 1.192.0
- name: kommander-thanos
repository: https://mesosphere.github.io/charts/stable
-version: 0.1.7
+version: 0.1.8
- name: kommander-karma
repository: https://mesosphere.github.io/charts/stable
-version: 0.3.2
+version: 0.3.3
- name: grafana
repository: https://kubernetes-charts.storage.googleapis.com
version: 3.8.19
-digest: sha256:b7956484edf4a924bd40849e84fb1618144566e12ed4240a10ee2997dedfaf7e
-generated: "2020-01-24T15:35:44.3664693Z"
+digest: sha256:bb7ed5c5badacc606782b5d682cdcd18359f43a2cb246eba0a6d36cd4e212a1e
+generated: "2020-01-29T09:48:03.609569-08:00"
4 changes: 2 additions & 2 deletions stable/kommander/requirements.yaml
@@ -12,11 +12,11 @@ dependencies:
repository: "https://mesosphere.github.io/kommander/charts"
condition: kommander-ui.enabled
- name: kommander-thanos
-version: "0.1.7"
+version: "0.1.8"
repository: "https://mesosphere.github.io/charts/stable"
condition: kommander-thanos.enabled
- name: kommander-karma
-version: "0.3.2"
+version: "0.3.3"
repository: "https://mesosphere.github.io/charts/stable"
condition: kommander-karma.enabled
- name: grafana
80 changes: 80 additions & 0 deletions stable/kommander/templates/grafana/hooks-home-dashboard.yaml
@@ -0,0 +1,80 @@
{{- if .Values.grafana.enabled }}
---
# Unable to get post-install job hook working, which is why
# this is a regular Job. The retries in the configmap script
# should ensure that this successfully runs once the Grafana
# server is up.
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Values.grafana.hooks.jobName | quote }}
namespace: {{ .Release.Namespace }}
labels:
Contributor: Should we add a hook delete policy for hook-succeeded? No need to keep the job and the pods around if everything worked.

Contributor (author): Unfortunately I was not able to get the hook working, so it's just a regular job.

Contributor: Oh I see, I thought it was just the post-install hook but it was all hooks. Okay, all good.

Contributor: Please put a JIRA ticket in to review this. We should be cleaning up after jobs.
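For reference, the hook-based variant discussed above would look roughly like the sketch below. It reuses the same helm.sh annotations this chart already applies to its pre-install/pre-delete jobs, but it is not what the PR ships, since the author could not get the hook to fire.

```yaml
# Sketch only - not part of this PR. The hook-based Job discussed above would
# roughly carry these annotations, with hook-succeeded cleaning up the pods:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Values.grafana.hooks.jobName | quote }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
```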

{{ include "kommander.labels" . | indent 4 }}
spec:
template:
metadata:
name: {{ .Values.grafana.hooks.jobName | quote }}
spec:
restartPolicy: Never
containers:
- name: {{ .Values.grafana.hooks.jobName | quote }}
image: {{ .Values.grafana.hooks.image | quote }}
command: ["/bin/sh", "-c", "/job/run.sh"]
env:
- name: X_FORWARDED_USER
valueFrom:
secretKeyRef:
name: {{ .Values.grafana.hooks.secretKeyRef }}
key: username
volumeMounts:
- mountPath: /job
name: job
volumes:
- name: job
configMap:
name: {{ .Values.grafana.hooks.jobName }}
defaultMode: 0777
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.grafana.hooks.jobName }}
data:
run.sh: |-
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
CURL="curl --verbose --fail --max-time 30 --retry 20 --retry-connrefused"
DASHBOARD_ID=$($CURL -H "X-Forwarded-User: $X_FORWARDED_USER" {{ .Values.grafana.hooks.serviceURL }}/api/dashboards/uid/{{ .Values.grafana.hooks.homeDashboardUID }} | jq '.dashboard.id')
echo "setting home dashboard to ID" $DASHBOARD_ID
$CURL -X PUT -H "Content-Type: application/json" -H "X-Forwarded-User: $X_FORWARDED_USER" -d '{"homeDashboardId":'"$DASHBOARD_ID"'}' {{ .Values.grafana.hooks.serviceURL }}/api/org/preferences
---
apiVersion: batch/v1
kind: Job
metadata:
name: cleanup-{{ .Values.grafana.hooks.jobName }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: cleanup-{{ .Values.grafana.hooks.jobName }}
spec:
serviceAccountName: {{ .Values.grafana.hooks.kommanderServiceAccount }}
containers:
- name: kubectl
image: bitnami/kubectl:1.16.2
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- kubectl delete configmap {{ .Values.grafana.hooks.jobName }} --namespace={{ .Release.Namespace }}
restartPolicy: OnFailure
{{- end }}
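A quick way to check what the job above actually did is to read the org preference back through the same X-Forwarded-User header. This is a sketch only, assuming the release namespace is kommander and the serviceURL/secret defaults from values.yaml further down:

```sh
# Sketch only: read back the home dashboard preference the job is meant to set.
# Assumes the kommander namespace and the values.yaml defaults for serviceURL
# and the ops-portal-username secret.
GRAFANA_URL=http://kommander-kubeaddons-grafana.kommander
X_USER=$(kubectl get secret ops-portal-username -n kommander \
  -o jsonpath='{.data.username}' | base64 --decode)

# homeDashboardId should now be the numeric ID behind the configured dashboard UID.
curl -s -H "X-Forwarded-User: $X_USER" "$GRAFANA_URL/api/org/preferences" | jq '.homeDashboardId'
```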
@@ -0,0 +1,55 @@
{{- if .Values.grafana.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
name: copy-{{ .Values.grafana.hooks.secretKeyRef }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
template:
metadata:
name: copy-{{ .Values.grafana.hooks.secretKeyRef }}
spec:
containers:
- name: kubectl
# --export flag is deprecated so we need to stick with this kubectl version
image: bitnami/kubectl:1.16.2
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- kubectl create secret generic {{ .Values.grafana.hooks.secretKeyRef }} -n {{ .Release.Namespace }} --from-literal=username=$(kubectl get secret ops-portal-credentials --namespace=kubeaddons --export -o jsonpath="{.data.username}" | base64 --decode)
Contributor: 👍

restartPolicy: OnFailure
---
apiVersion: batch/v1
kind: Job
metadata:
name: cleanup-{{ .Values.grafana.hooks.secretKeyRef }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: cleanup-{{ .Values.grafana.hooks.secretKeyRef }}
spec:
serviceAccountName: {{ .Values.grafana.hooks.kommanderServiceAccount }}
containers:
- name: kubectl
image: bitnami/kubectl:1.16.2
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- kubectl delete secret {{ .Values.grafana.hooks.secretKeyRef }} --namespace={{ .Release.Namespace }}
restartPolicy: OnFailure
{{- end }}
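On the kubectl pin above: because the output is already narrowed with -o jsonpath to a single field, the same copy works on newer kubectl releases without --export. A sketch only (assuming the release namespace is kommander), not what the chart runs:

```sh
# Sketch only - equivalent copy without the deprecated --export flag.
# jsonpath already strips everything except .data.username, so --export adds
# nothing here; the pinned bitnami/kubectl:1.16.2 image keeps the original
# command working as written.
USERNAME=$(kubectl get secret ops-portal-credentials -n kubeaddons \
  -o jsonpath='{.data.username}' | base64 --decode)
kubectl create secret generic ops-portal-username -n kommander \
  --from-literal=username="$USERNAME"
```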
14 changes: 11 additions & 3 deletions stable/kommander/templates/hooks-kubeaddons.yaml
@@ -7,13 +7,18 @@ metadata:
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
-"helm.sh/hook": "pre-install"
+"helm.sh/hook": pre-install,pre-delete
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
{{- if .Values.grafana.enabled }}
- apiGroups: [""]
resources: ["secrets", "configmaps"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -22,7 +27,7 @@ metadata:
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
-"helm.sh/hook": "pre-install"
+"helm.sh/hook": pre-install,pre-delete
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
roleRef:
@@ -33,6 +38,9 @@ subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace }}
- kind: ServiceAccount
name: {{ template "kommander.fullname" . }}
namespace: {{ .Release.Namespace }}
---
apiVersion: batch/v1
kind: Job
@@ -42,7 +50,7 @@ metadata:
labels:
{{ include "kommander.labels" . | indent 4 }}
annotations:
-"helm.sh/hook": "pre-install"
+"helm.sh/hook": pre-install
"helm.sh/hook-weight": "3"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
9 changes: 9 additions & 0 deletions stable/kommander/values.yaml
@@ -29,6 +29,15 @@ kubeaddons-catalog:
grafana:
enabled: true

hooks:
jobName: set-kommander-grafana-home-dashboard
image: dwdraju/alpine-curl-jq
secretKeyRef: ops-portal-username
serviceURL: http://kommander-kubeaddons-grafana.kommander
# This is the UID of the "Kubernetes / Compute Resources / Clusters" summary dashboard
homeDashboardUID: efa86fd1d0c121a26444b636a3f509a8
Contributor: Is this value hardcoded into the Grafana JSON itself? What is the likelihood this value will change? The original prom job queries the API for the chart by name, then sets the ID, whereas here you seem to already know it?

Contributor: Answered my own question here, but I'm wondering: if we ever update the chart, does the UID change, or is it set in stone forever? If it changes we should query by name, but if not, then this is fine.

Contributor (author): The UID would only change if a dev went in there and changed that field - but it's the same with the name; they're both fields set on the same JSON. Personally it felt cleaner to me to grab a dashboard by the UID that we set on the dashboard (since we are in charge of creating that dashboard and we know it exists) rather than querying for a name/string (in dcos-monitoring, we grab the dashboard by UID). The API call itself also just looks cleaner to me: /api/search/?query=Kubernetes+%2F+Compute+Resources+%2F+Cluster vs /api/dashboards/uid/{{ .Values.grafana.hooks.homeDashboardUID }}.

Contributor: I'm OK with querying for the dashboard by its UID. I think the only downside is that you can't tell at a glance which dashboard we're setting. How about a comment here that mentions the name of the dashboard?

Contributor: A comment is a good compromise. We may add more dashboards in the future, so this helps a human identify the dashboard without having to grep through all of them for the UID.

Contributor: At least we plumbed it to the values file, good call. I am also hesitant about this but we can figure this out later.
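For comparison, the two lookups discussed above could be exercised like this (a sketch only; $GRAFANA_URL and $X_USER as in the earlier verification sketch). The search endpoint returns an array of hits, so the ID has to be picked out of the first match rather than a single dashboard object:

```sh
# Sketch only - the two dashboard lookups discussed above.

# By UID (what the chart's job does): a single dashboard object comes back.
curl -s -H "X-Forwarded-User: $X_USER" \
  "$GRAFANA_URL/api/dashboards/uid/efa86fd1d0c121a26444b636a3f509a8" | jq '.dashboard.id'

# By title via /api/search (the alternative): the response is an array of hits.
curl -s -H "X-Forwarded-User: $X_USER" \
  "$GRAFANA_URL/api/search?query=Kubernetes%20%2F%20Compute%20Resources%20%2F%20Cluster" | jq '.[0].id'
```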

kommanderServiceAccount: kommander-kubeaddons

## Do not deploy default dashboards.
##
defaultDashboardsEnabled: false