# kommander: Set Grafana home dashboard in Kommander Grafana #386
The chart version is bumped from 0.3.20 to 0.3.21:

```diff
@@ -7,4 +7,4 @@ maintainers:
 - name: alejandroEsc
 - name: jimmidyson
 name: kommander
-version: 0.3.20
+version: 0.3.21
```
A new 80-line template adds the Job that sets the Grafana home dashboard, the ConfigMap carrying its script, and a pre-delete Job that cleans the ConfigMap up:

```yaml
{{- if .Values.grafana.enabled }}
---
# Unable to get post-install job hook working, which is why
# this is a regular Job. The retries in the configmap script
# should ensure that this successfully runs once the Grafana
# server is up.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Values.grafana.hooks.jobName | quote }}
  namespace: {{ .Release.Namespace }}
  labels:
{{ include "kommander.labels" . | indent 4 }}
spec:
  template:
    metadata:
      name: {{ .Values.grafana.hooks.jobName | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: {{ .Values.grafana.hooks.jobName | quote }}
        image: {{ .Values.grafana.hooks.image | quote }}
        command: ["/bin/sh", "-c", "/job/run.sh"]
        env:
        - name: X_FORWARDED_USER
          valueFrom:
            secretKeyRef:
              name: {{ .Values.grafana.hooks.secretKeyRef }}
              key: username
        volumeMounts:
        - mountPath: /job
          name: job
      volumes:
      - name: job
        configMap:
          name: {{ .Values.grafana.hooks.jobName }}
          defaultMode: 0777
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.grafana.hooks.jobName }}
data:
  run.sh: |-
    #!/bin/bash
    set -o nounset
    set -o errexit
    set -o pipefail
    CURL="curl --verbose --fail --max-time 30 --retry 20 --retry-connrefused"
    DASHBOARD_ID=$($CURL -H "X-Forwarded-User: $X_FORWARDED_USER" {{ .Values.grafana.hooks.serviceURL }}/api/dashboards/uid/{{ .Values.grafana.hooks.homeDashboardUID }} | jq '.dashboard.id')
    echo "setting home dashboard to ID" $DASHBOARD_ID
    $CURL -X PUT -H "Content-Type: application/json" -H "X-Forwarded-User: $X_FORWARDED_USER" -d '{"homeDashboardId":'"$DASHBOARD_ID"'}' {{ .Values.grafana.hooks.serviceURL }}/api/org/preferences
---
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-{{ .Values.grafana.hooks.jobName }}
  namespace: {{ .Release.Namespace }}
  labels:
{{ include "kommander.labels" . | indent 4 }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: cleanup-{{ .Values.grafana.hooks.jobName }}
    spec:
      serviceAccountName: {{ .Values.grafana.hooks.kommanderServiceAccount }}
      containers:
      - name: kubectl
        image: bitnami/kubectl:1.16.2
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - kubectl delete configmap {{ .Values.grafana.hooks.jobName }} --namespace={{ .Release.Namespace }}
      restartPolicy: OnFailure
{{- end }}
```
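A quick manual check that the job did its work could look like the sketch below. The service name comes from the `serviceURL` default in values.yaml; the port (80), the `kommander` namespace, and `<admin-user>` are assumptions, not part of this PR:

```sh
# Sketch of a manual verification, assuming the Grafana service listens on
# port 80 in the kommander namespace and the auth proxy trusts the
# X-Forwarded-User header (as the job's script does).
kubectl port-forward -n kommander svc/kommander-kubeaddons-grafana 3000:80 &

# The numeric ID the job resolves from the dashboard UID...
curl -s -H "X-Forwarded-User: <admin-user>" \
  http://localhost:3000/api/dashboards/uid/efa86fd1d0c121a26444b636a3f509a8 \
  | jq '.dashboard.id'

# ...should now match the org-wide home dashboard preference.
curl -s -H "X-Forwarded-User: <admin-user>" \
  http://localhost:3000/api/org/preferences | jq '.homeDashboardId'
```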
A second 55-line template copies the ops-portal username into a dedicated secret before install and removes it again on delete:

```yaml
{{- if .Values.grafana.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-{{ .Values.grafana.hooks.secretKeyRef }}
  namespace: {{ .Release.Namespace }}
  labels:
{{ include "kommander.labels" . | indent 4 }}
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
  template:
    metadata:
      name: copy-{{ .Values.grafana.hooks.secretKeyRef }}
    spec:
      containers:
      - name: kubectl
        # --export flag is deprecated so we need to stick with this kubectl version
        image: bitnami/kubectl:1.16.2
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - kubectl create secret generic {{ .Values.grafana.hooks.secretKeyRef }} -n {{ .Release.Namespace }} --from-literal=username=$(kubectl get secret ops-portal-credentials --namespace=kubeaddons --export -o jsonpath="{.data.username}" | base64 --decode)
      restartPolicy: OnFailure
---
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-{{ .Values.grafana.hooks.secretKeyRef }}
  namespace: {{ .Release.Namespace }}
  labels:
{{ include "kommander.labels" . | indent 4 }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: cleanup-{{ .Values.grafana.hooks.secretKeyRef }}
    spec:
      serviceAccountName: {{ .Values.grafana.hooks.kommanderServiceAccount }}
      containers:
      - name: kubectl
        image: bitnami/kubectl:1.16.2
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - kubectl delete secret {{ .Values.grafana.hooks.secretKeyRef }} --namespace={{ .Release.Namespace }}
      restartPolicy: OnFailure
{{- end }}
```
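Outside of Helm, the pre-install hook boils down to roughly the following. This is a hand-run sketch, not the chart's code: it assumes the `ops-portal-credentials` secret exists in the `kubeaddons` namespace and that the release namespace is `kommander`, and it drops the deprecated `--export` flag (the reason the chart pins kubectl 1.16.2):

```sh
# Rough hand-run equivalent of the pre-install hook. Newer kubectl versions
# no longer support --export, so it is simply omitted here.
USERNAME=$(kubectl get secret ops-portal-credentials --namespace=kubeaddons \
  -o jsonpath="{.data.username}" | base64 --decode)
kubectl create secret generic ops-portal-username -n kommander \
  --from-literal=username="$USERNAME"
```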
Finally, values.yaml wires the hook configuration in:

```diff
@@ -29,6 +29,15 @@ kubeaddons-catalog:
 grafana:
   enabled: true
 
+  hooks:
+    jobName: set-kommander-grafana-home-dashboard
+    image: dwdraju/alpine-curl-jq
+    secretKeyRef: ops-portal-username
+    serviceURL: http://kommander-kubeaddons-grafana.kommander
+    # This is the UID of the "Kubernetes / Compute Resources / Clusters" summary dashboard
+    homeDashboardUID: efa86fd1d0c121a26444b636a3f509a8
+    kommanderServiceAccount: kommander-kubeaddons
+
   ## Do not deploy default dashboards.
   ##
   defaultDashboardsEnabled: false
```

Review discussion on `homeDashboardUID`:

> **Reviewer:** Is this value hardcoded into the Grafana JSON itself? What is the likelihood this value will change? The original prom job queries the API for the chart by name, then sets the ID, whereas here you seem to already know it?
>
> **Reviewer:** Answered my own question here, but I'm wondering: if we ever update the chart, does the UID change, or is it set in stone forever? If it changes, we should query by name, but if not, then this is fine.
>
> **Author:** The UID would only change if a dev went in there and changed that field - but it's the same with the name; they're both fields set on the same JSON. Personally it felt cleaner to grab the dashboard by the UID that we set on it (since we are in charge of creating that dashboard and we know it exists) rather than querying for a name/string (in dcos-monitoring, we grab dashboards by UID). The API call itself also just looks cleaner to me.
>
> **Reviewer:** I'm ok with querying for the dashboard by its UID. I think the only downside is that you can't tell at a glance which dashboard we're setting. How about a comment here that mentions the name of the dashboard?
>
> **Reviewer:** A comment is a good compromise. We may add more dashboards in the future, so this helps a human identify the dashboard without having to grep through all of them for the UID.
>
> **Reviewer:** At least we plumbed it to the values file, good call. I am also hesitant about this, but we can figure this out later.
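If the UID ever did need to go, the query-by-name approach from the discussion would look roughly like the sketch below. This is hypothetical, not code from the PR: the `/api/search` endpoint and response shape are assumed from the standard Grafana HTTP API, and the title comes from the values.yaml comment above:

```sh
# Hypothetical alternative: resolve the dashboard ID by searching on the
# dashboard title instead of hardcoding its UID. Reuses the same auth-proxy
# header the job already relies on; GRAFANA_URL stands in for serviceURL.
DASHBOARD_TITLE="Kubernetes / Compute Resources / Clusters"
DASHBOARD_ID=$(curl -s -G -H "X-Forwarded-User: $X_FORWARDED_USER" \
  --data-urlencode "query=$DASHBOARD_TITLE" \
  "$GRAFANA_URL/api/search" | jq '.[0].id')
```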
Review discussion on the job lifecycle:

> **Reviewer:** Should we add a hook delete policy for `hook-succeeded`? No need to keep the job and the pods around if everything worked.
>
> **Author:** Unfortunately I was not able to get the hook working, so it's just a regular job.
>
> **Reviewer:** Oh I see, I thought it was just the post-install hook, but it was all hooks. Okay, all good.
>
> **Reviewer:** Please file a Jira ticket to review this; we should be cleaning up after jobs.
>
> **Author:** https://jira.d2iq.com/browse/D2IQ-63670
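Until that ticket is addressed, the finished job and its ConfigMap can be removed by hand. The names below come from the `jobName` default in values.yaml; the `kommander` namespace is an assumption about where the release lives:

```sh
# Manual cleanup of the completed home-dashboard job (see D2IQ-63670).
# The Job and its ConfigMap share the jobName value from values.yaml.
kubectl delete job set-kommander-grafana-home-dashboard -n kommander
kubectl delete configmap set-kommander-grafana-home-dashboard -n kommander
```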