
fix(charts): mongodb persistence + ingressClassName support and configuration improvements suggestion #273

Closed · wants to merge 11 commits
Conversation

Grraahaam
Contributor

@Grraahaam Grraahaam commented Jan 29, 2023

Description 📣

fix

  • Replaced metadata.annotations.kubernetes.io/ingress.class: "nginx" with the recommended spec.ingressClassName in the ingress resource (details here and there)

  • Simplified value paths in templates (e.g. .Values.backend to $backend), reducing path length
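For reference, the ingress change looks roughly like this on current API versions (resource name, host, and service are illustrative, not taken from the chart):

```yaml
# Before: the deprecated annotation
#   metadata:
#     annotations:
#       kubernetes.io/ingress.class: "nginx"
#
# After: the ingressClassName field in the spec
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: infisical-ingress           # illustrative name
spec:
  ingressClassName: nginx
  rules:
    - host: infisical.example.com   # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend      # illustrative service
                port:
                  number: 3000
```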

feat

  • MongoDB storage persistence by using PVC and linking the database storage to it
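A minimal sketch of what PVC-backed persistence involves (names and size are illustrative, not the chart's actual values):

```yaml
# Claim persistent storage for the MongoDB data directory
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infisical-mongodb        # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi               # illustrative size
---
# In the MongoDB pod template, mount the claim at mongod's data path
# spec:
#   containers:
#     - name: mongodb
#       volumeMounts:
#         - name: data
#           mountPath: /data/db
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: infisical-mongodb
```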

refactor

  • Refactored the values.yaml and the way we pass configuration variables to our pods by preferring native k8s secrets, and using envFrom
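The envFrom pattern referenced here looks roughly like this (Secret name and value are illustrative):

```yaml
# Deployment container spec: expose every key of a Secret as an env var
containers:
  - name: backend
    image: infisical/backend:latest
    envFrom:
      - secretRef:
          name: infisical-backend   # illustrative Secret name
---
# The Secret itself; stringData lets you write plain values,
# the API server base64-encodes them into .data
apiVersion: v1
kind: Secret
metadata:
  name: infisical-backend
type: Opaque
stringData:
  MONGO_URL: mongodb://user:pass@mongodb:27017   # illustrative value
```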

docs

  • Added more detailed installation instructions

I admit that this PR contains a lot of different changes, but they're all related to our helm-charts. Ping me if you'd rather have me split these changes into dedicated PRs.

Type ✨

Bug fix
New feature
Breaking change
Documentation

Tests 🛠️

Helm

# Infisical
cd ./helm-charts/infisical
helm lint
helm template infisical .

# Infisical Secrets Operator
cd ./helm-charts/secrets-operator
helm lint
helm template secrets-operator .

@Grraahaam
Contributor Author

Another PR in progress to easily setup a local k8s development environment 👷🏽

@maidul98
Collaborator

Hey @Grraahaam I'll review this soon!

image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.backend.image.pullPolicy }}
- name: {{ template "infisical.name" . }}-{{ $backend.name }}
image: "{{ $backend.image.repository }}:{{ $backend.image.tag | default .Chart.AppVersion }}"
Collaborator

I'm wondering if we should get rid of the default .Chart.AppVersion, because we would never have an image tag that matches .Chart.AppVersion. Those aren't related. What do you think?

Contributor Author

Yeah, since the infisical chart contains several services it doesn't really make sense here ✌🏽

It could be like this instead, wdyt?

Suggested change
image: "{{ $backend.image.repository }}:{{ $backend.image.tag | default .Chart.AppVersion }}"
image: "{{ $backend.image.repository }}:{{ $backend.image.tag | default "latest" }}"

Collaborator

Yes, perfect. Let's do the same for frontend as well

Contributor Author

Fixed in 3a11898

{{- end }}
env:
- name: MONGO_URL
value: {{ include "infisical.mongodb.connectionString" . | quote }}
{{- if .Values.backendEnvironmentVariables }}
{{- range $key, $value := .Values.backendEnvironmentVariables }}
Collaborator

@maidul98 maidul98 Jan 31, 2023

Is there a reason why you deleted the ability to write envs right into the deployment file? I'm guessing because it is more secure and can be stored in kubeSecretRef? Or were you thinking of something else?

Collaborator

Oh I see, you put them in a secret below.

Contributor Author

is there a reason why you deleted the ability to write envs right into the deployment file?

Storing secrets in a dedicated resource is a better alternative IMO, it allows you to fine-grain the access and do much more than if it was directly defined in the deployment, see https://kubernetes.io/docs/concepts/security/secrets-good-practices/ for best practices ✌️

Collaborator

I agree, this is a better approach.

Contributor

I think the variable backendEnvironmentVariables was completely removed. This will break existing installations.

Contributor Author

I think the variable backendEnvironmentVariables was completely removed. This will break existing installations.

Yes, IMO it makes more sense to define those variables under the backend (and frontend as well) key, since those are required for each service. That's why I've specified "Breaking change" in my PR description.

Since we're still in early stages, breaking changes won't impact a lot of people so far, and they'll still be able to use the previous chart version if ever they don't want to break their current installation, right?

I think it's best to define a good structure now and avoid supporting deprecated keys at this stage, what do you think?

I can still add support for backendEnvironmentVariables if ever we want to avoid breaking existing installations ✌️

Contributor

well, it's up to @maidul98 if you want to keep this non breaking or not. I just wanted to point it out 😅

name: {{ $backend.kubeSecretRef | default "infisical-backend" }}
type: Opaque
data:
{{- range $key, $value := $backend.secrets }}
Collaborator

I'm curious, let's say I removed some secrets from the values file for the backend. Would upgrading the helm chart also reflect one-to-one into this secret? Or do old values that no longer exist in the values file continue to exist, and only the remaining keys get overwritten?

Contributor Author

The secrets defined in values.yaml are the ones deployed in k8s. If you remove one from the values file, it'll be removed from the corresponding k8s Secret resource as well ✌️ (just did the test locally to be sure)

Collaborator

Awesome

readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 10
periodSeconds: 10
{{- if .Values.frontend.kubeSecretRef }}
{{- if $frontend.kubeSecretRef }}
Collaborator

It looks like the user has to define a kubeSecretRef + define the environment variables in values.frontend.secrets ? I feel like if they are defining values.frontend.secrets then we should just put those in a secret and auto reference them. Thoughts?

Contributor Author

@Grraahaam Grraahaam Jan 31, 2023

Good catch, my bad! Here's my suggestion (to be applied to both frontend and backend):

# For the deployment
...
envFrom:
  - secretRef:
       name: {{ $backend.kubeSecretRef | default "infisical-backend" }}
...
# For the secrets
{{ if not $backend.kubeSecretRef }}
apiVersion: v1
kind: Secret
...
{{- end }}
# For the value file (reduce the `kubeSecretRef` indentation)
...
backend:
  name: backend
  ...
  # Provide your own or leave it empty for default
  kubeSecretRef: "my-own-secrets"

Contributor Author

Fixed in 8a66561

@@ -1,3 +1,4 @@
{{- $ctrlManager := .Values.controllerManager }}
Collaborator

All changes for the secrets-operator helm chart get auto-generated by Kubebuilder. This means that these changes will be overwritten on the next release. So feel free to remove them or leave it as is.

data:
{{- range $key, $value := $frontend.secrets }}
{{ $key }}: {{ $value | b64enc }}
Collaborator

@maidul98 maidul98 Jan 31, 2023

I think we should wrap the value in quotes before b64enc, otherwise we get

Error: UPGRADE FAILED: template: infisical/templates/frontend-deployment.yaml:76:26: executing "infisical/templates/frontend-deployment.yaml" at <b64enc>: wrong type for value; expected string; got float64

The alternative is that people have to put quotes around every value on their own.

Contributor Author

Fixed in 3ac9f98

What value did you try (redacted if needed)? I've never encountered this issue in my tests 👀

Collaborator

@maidul98 maidul98 Jan 31, 2023

Just when you add booleans or numbers as values directly
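One way to make the template tolerant of booleans and numbers is to coerce each value to a string before encoding; a sketch using Sprig's toString (not necessarily the exact fix applied in 3ac9f98):

```yaml
data:
{{- range $key, $value := $frontend.secrets }}
  # toString coerces bools/numbers so b64enc always receives a string
  {{ $key }}: {{ $value | toString | b64enc }}
{{- end }}
```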

Contributor Author

Indeed, I was able to reproduce your issue! And the above fix patched it

@maidul98
Collaborator

Everything else looks great! I was able to run it on DigitalOcean. However, when I did

helm upgrade infisical .  --values /Users/maidulislam/Desktop/k8-values/dev/values.yaml --recreate-pods

I expected the data in the volume to persist, but after reloading my page it seems like the data was lost. FYI this is on DigitalOcean (I was able to see the PVC there, so it definitely got created).

@Grraahaam
Contributor Author

helm upgrade infisical .  --values /Users/maidulislam/Desktop/k8-values/dev/values.yaml --recreate-pods

Ow, what Helm version are you running (helm version --short)? 🤔 (--recreate-pods has been deprecated since Helm 3, explained here)

I expected the data in the volume to persist, but reloading my page seems like the data was lost. Fyi this is on Digital ocean (I was able to see the PVC there so it definitely got created)

I've checked locally, and it works. Here are the steps to reproduce what I did:

# Define variables (edit the values according to your needs)
values="values.yaml"
select="{}, { email: 1, firstName: 1, lastName: 1 }"
insert="{
  email: '[email protected]',
  firstName: 'john',
  lastName: 'doe',
  publicKey: 'xxx',
  encryptedPrivateKey: 'xxx',
  iv: 'xxx',
  tag: 'xxx',
  salt: 'xxx',
  verifier: 'xxx',
  refreshVersion: 0,
}"

cd ./helm-charts/infisical

# install the infisical chart
helm upgrade --install --create-namespace -n infisical \
  -f $values --wait \
  infisical .

# get the mongo pod name/id
mongo_pod=$(kubectl get pod -n infisical -l component=mongodb -o jsonpath="{.items[0].metadata.name}")

# inserting a fake user
kubectl exec -i -n infisical $mongo_pod -- \
  mongosh -u root -p root --quiet --eval \
  "db.users.insertOne($insert)"

# check if it exists
kubectl exec -i -n infisical $mongo_pod -- \
  mongosh -u root -p root --quiet --eval \
  "db.users.find($select)"

# delete the pod
kubectl delete pod -n infisical $mongo_pod

# get the new mongo pod name/id
mongo_pod=$(kubectl get pod -n infisical -l component=mongodb -o jsonpath="{.items[0].metadata.name}")

# should still be able to fetch the above created fake user
kubectl exec -i -n infisical $mongo_pod -- \
  mongosh -u root -p root --quiet --eval \
  "db.users.find($select)"

apiVersion: v1
kind: Secret
metadata:
name: {{ $backend.kubeSecretRef | default "infisical-backend" }}
Contributor

Shouldn't the name default to include "infisical.backend.fullname" .? Otherwise multiple helm installs without customization will have colliding names.
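A sketch of such a default using the fullname helper named in this comment (assuming the helper exists in the chart):

```yaml
metadata:
  # fall back to a release-scoped name instead of a fixed string
  name: {{ $backend.kubeSecretRef | default (include "infisical.backend.fullname" .) }}
```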

Contributor Author

Good catch! Fixed in cf95466

kind: Secret
metadata:
name: {{ $frontend.kubeSecretRef | default "infisical-frontend" }}
Contributor

Same as for the backend secret. The default should probably be include "infisical.frontend.fullname" .

Contributor Author

Good catch! Fixed in cf95466

@jon4hz
Contributor

jon4hz commented Jan 31, 2023

While I appreciate the effort regarding the mongodb persistence, I suggest refactoring that part completely and implementing #182 instead. Bitnami's mongodb chart is extremely good and offers much more flexibility.

If the bitnami mongodb chart is on Infisical's roadmap anyway, it would be easier to implement it now. Currently people probably don't use the mongo part of this chart due to the missing persistence. But if that changes, it will be much harder to switch later.

@Grraahaam
Contributor Author

Thanks @jon4hz! Indeed, the current mongodb persistence was mostly a POC and I was already looking at Bitnami's MongoDB chart. I'll probably try to implement it here then (as a customizable dependency); this way we'll benefit from a well-tested and maintained chart!

Once it's done, both #173 and #182 will be solved ✌️

@@ -1,3 +1,4 @@
{{- $mongodb := .Values.mongodb }}
apiVersion: apps/v1

Can we make this deployment optional (in case we want to use an external database)?

Collaborator

I think what we can do is if externalMongoDBConnectionString is defined then no need to spin this deployment up. What do you think @Grraahaam

Contributor Author

@zifeo @maidul98 As mentioned above, I'm re-structuring the chart to follow best practices (e.g. sub-charts, referencing), which will give you fine-grained control over which services to enable (e.g. you'll be able to run only the front/back if you need to).

I'll try to do this without adding too many breaking changes, but it'll likely break existing installations at some point... But we'll still be able to point to the previous chart through its version ✌🏽

I think what we can do is if externalMongoDBConnectionString is defined then no need to spin this deployment up. What do you think @Grraahaam

It might be something like this, pretty similar in idea, I guess!
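The gate maidul98 describes could be sketched by wrapping the whole manifest in a condition (value name taken from the comment above; the deployment body is elided):

```yaml
{{- if not .Values.externalMongoDBConnectionString }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: infisical-mongodb      # illustrative name
spec:
  # ... existing MongoDB deployment spec ...
{{- end }}
```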

@Grraahaam Grraahaam mentioned this pull request Feb 8, 2023
5 tasks
@Grraahaam Grraahaam marked this pull request as draft February 15, 2023 08:57
@Grraahaam
Contributor Author

Partly replaced by #314 and #353

I'll open a new PR for the Helm chart structure improvements ✌️

@Grraahaam Grraahaam closed this Feb 27, 2023