This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Create Best Practices doc #44

Closed
viglesiasce opened this issue Aug 25, 2016 · 54 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@viglesiasce (Contributor)

We need a place where we can establish and document best practices for chart creation. This should then be referenced from the README in order to point users to this repository's conventions.

@viglesiasce (Contributor, Author)

Things that need to be discussed:

  • Upgrading
  • Configuration files
  • Command line flags

@prydonius (Member)

Thanks for capturing this @viglesiasce. @sameersbn put together a great set of Kubernetes manifest and Helm chart authoring guides (https://github.com/bitnami/charts/tree/master/_docs) that could provide a base for something.

@chrislovecnm (Contributor)

The one thing that has been bugging me is Docker standards. @viglesiasce, we probably need to get together with a couple of people to discuss, or we can hijack a SIG Apps meeting.

@chrislovecnm (Contributor)

I know I am beating this dead horse just a bit, but we need to go through the existing charts and get compute resources (http://kubernetes.io/docs/user-guide/compute-resources/) laid out. We should probably also update any standards docs/guidelines that we have.

@michelleN (Member)

@chrislovecnm (Contributor)

@michelleN should we set aside 15 min in the Helm SIG or SIG Apps to knock this out? I am knee-deep in writing the HA upgrade guide, so I am not sure how else we can get to it.

@viglesiasce (Contributor, Author)

@chrislovecnm have a look at the technical requirements that I bolstered today. I think it will be best to just build out that section in the docs:
https://github.com/kubernetes/charts/blob/master/CONTRIBUTING.md#technical-requirements

@chrislovecnm (Contributor)

Nice! Added a bunch of notes to the doc.

@chrislovecnm (Contributor)

We need a linter to check for that stuff, btw.

@chrislovecnm (Contributor)

For ease of reading:

  • camelCase vs. uppercase for vars
  • Helper templates and other toYaml kung fu
  • Use Docker images from the main product? Redis maintains their own Docker image
  • Where do we put Dockerfiles in the repo?
  • Break up the Docker image name and version number
  • Sections for the README: template?
  • How are we going to name charts that have multiple variants?
    For example: Redis without HA, Redis with slaves, Redis clustered.
    Do we have one massive nasty chart or three charts?

Also, upgrades are not supported by PetSets... just a note for that section.

@gtaylor (Collaborator)

gtaylor commented Oct 26, 2016

My pet list of things to consistify, after reading through a good number of these charts:

  • values.yaml: flat vs. nested/grouped
  • YAML list indentation: indented or flat
  • standard ways to define resource limits (I like what we're seeing with the toYaml method, subjectively)
  • comment style (two hashes vs. one)
  • documenting things in values.yaml and/or README.md; afraid of duplication right now
  • outlined expectations on when ConfigMaps should be used
  • guidance on standard PVC usage patterns
  • since AWS ELBs can't forward UDP, we should have a standard pattern for Services and a blurb for NOTES.txt (NodePort?)
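For reference, the toYaml method mentioned above typically looks something like this (a sketch; the `resources` key name and the fragment's exact indentation are common conventions, not something this thread fixed):

```yaml
# values.yaml -- overridable defaults
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 256Mi

# templates/deployment.yaml (fragment)
containers:
  - name: {{ template "fullname" . }}
    resources:
{{ toYaml .Values.resources | indent 6 }}
```

The appeal is that users can override the whole block from values.yaml without the chart enumerating every limit field.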

@mgoodness (Contributor)

My best practice recs:

  • static container names
    • if one container, use {{ template "name" . }}
    • if multiple, use {{ template "name" . }}-{{ .Values.container1.name }}, {{ template "name" . }}-{{ .Values.container2.name }}, etc.
  • app labels on resources should also use {{ template "name" . }} instead of the current {{ template "fullname" . }} pattern
  • charts that consist of multiple "components" should assign static component labels to the respective resources
    • component Services should also have component as a selector

I explained my reasoning for the first two items in #168 (specifically here, here, and here).

The last item is a pattern I use at work, explained here.
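A minimal sketch of the component-label pattern described above, assuming a hypothetical "web" component:

```yaml
# templates/web-svc.yaml (sketch; "web" is a placeholder component name)
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}-web
  labels:
    app: {{ template "name" . }}
    component: web
spec:
  selector:
    app: {{ template "name" . }}
    component: web
  ports:
    - port: 80
```

With a static component label, kubectl can select just that piece (e.g. -l component=web) regardless of the release name.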

@sputnick (Contributor)

sputnick commented Nov 16, 2016

I would like to add a best practice item request... I would love to see the recommendation and practice of using a global property

persistence:
  enabled: true/false

in all official stable charts that completely disables PVs/PVCs in the main Chart and Dependent charts.

Why?
On k8s clusters with no PVs, you have to go figure out which non-standard parameters disable the PVC for each snowflake chart's values, and then also for its dependencies. This is true, e.g., for existing charts like WordPress in K8s Stable (you need to turn off PVs for MariaDB specifically).

In my view, this is a UX problem for newbies using the official Stable charts: on a cluster like "kube-solo" or "minikube", which many start on, there are no PVs, and in those cases there will never be a mixed situation where only part of the chart should have PVs. If this became a standard (you should also be able to turn chart PVCs off or on individually), it would undoubtedly mean less grief for users and chart maintainers.

Also, if this became a best practice standard then perhaps this could be added as a Helm install option (further improving UX).
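A sketch of what the suggested flag could look like (the `persistence.enabled` key mirrors the comment above; the PVC spec itself is illustrative):

```yaml
# values.yaml
persistence:
  enabled: false
  size: 8Gi

# templates/pvc.yaml -- the whole resource is gated on the flag
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "fullname" . }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- end }}
```

When the flag is off, the deployment would fall back to an emptyDir volume instead of the claim.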

@gtaylor (Collaborator)

gtaylor commented Nov 16, 2016

I would like to add a best practice item request... I would love to see the recommendation and practice of using a global property

I'm -1 on this, as outlined in the matching discussion on the gitlab-ce PR:

#202 (comment)

tl;dr: Some charts like factorio, gitlab, and even Cassandra store various bits of data in different places. There are cases when some of your users might want persistence for one, but not the other. For example, we'd be actively wasting money/quota/resources for many of our factorio users, since the default install doesn't come with any mods to persist.

I think our policy of sane defaults covers the split, in that users with special needs will read the chart README/values and adjust as needed.

On k8s clusters with no PVs you have to go figure out what non standard parameters disable the PVC for each snowflake chart values and then also for the its dependencies

What percentage of our userbase do we think will be put in a situation like this? Is it enough to warrant the loss of flexibility?

Part of this is requiring 1.4+, which it looks like we may be leaning towards (based on @prydonius's last comment and the thumbs-ups): helm/helm#1528

@sputnick (Contributor)

@gtaylor did you read my answer on #202? The use case is clusters with no persistent storage, which right now is pretty much every out-of-the-box do-it-yourself dev cluster, like minikube, kube-solo, and kube-cluster. And I'm suggesting having both per-dependency flags and a global one, not either/or. So it's more flexible.

@viglesiasce (Contributor, Author)

Another one for this list:

Ensure repository (not just tag) is configurable.
https://github.com/kubernetes/charts/pull/276/files
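In practice this means splitting the image reference in values.yaml rather than hard-coding the repository in the template (a sketch; `myorg/myapp` is a placeholder):

```yaml
# values.yaml
image:
  repository: myorg/myapp  # placeholder repository
  tag: "1.0.0"
  pullPolicy: IfNotPresent

# templates/deployment.yaml (fragment)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
```

Users behind private registries or mirrors can then swap the repository without forking the chart.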

@blakebarnett (Contributor)

Echoing what @mgoodness said above: app labels are problematic (and long) when they include the release, and I've also noticed that many charts don't have an app label at all. We've needed to add component labels to various charts as well, so that we can select pieces to use with kubectl port-forward, etc. I also add the release label to selectors for specificity.

Example changes for the prometheus server service:

@@ -2,7 +2,7 @@
 kind: Service
 metadata:
   labels:
-    app: {{ template "fullname" . }}
+    app: "{{ .Chart.Name }}"
     chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
     component: "{{ .Values.server.name }}"
     heritage: "{{ .Release.Service }}"
@@ -15,6 +15,7 @@
       protocol: TCP
       targetPort: 9090
   selector:
-    app: {{ template "fullname" . }}
+    app: "{{ .Chart.Name }}"
+    release: "{{ .Release.Name }}"
     component: "{{ .Values.server.name }}"
   type: "{{ .Values.server.serviceType }}"

@chancez (Contributor)

chancez commented Dec 20, 2016

I'd like to know what the best practices are for indicating whether values are required or optional.

@mgoodness (Contributor)

In addition to my previous suggestions, I've taken to changing the fullname template in my internal charts. Rather than having it generate RELEASE_NAME-CHART_NAME, it's flipped to CHART_NAME-RELEASE_NAME. If a chart handles multiple components, I'll do something like CHART_NAME-COMPONENT-RELEASE_NAME. I find it's easier to view multiple releases of the same chart in kubectl when using that pattern.
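A sketch of the flipped helper described above (the trunc/trimSuffix guards mirror the stock fullname template, which has to stay within the 63-character name limit):

```yaml
{{/* _helpers.tpl -- CHART_NAME-RELEASE_NAME instead of the default RELEASE_NAME-CHART_NAME */}}
{{- define "fullname" -}}
{{- printf "%s-%s" .Chart.Name .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

Sorting by name in kubectl then groups all releases of the same chart together.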

@viglesiasce (Contributor, Author)

Pod names should include both the release name and the chart name. When using a chart as a dependency, it's good to be able to distinguish pods that belong to the same release.

@prydonius,

Do we have a macro for this yet or is the _helper template for "fullname" the best we have right now?

@prydonius (Member)

@viglesiasce the fullname template is the best we have for now; see helm/helm#1186, which looks like it's slated for 2.3.0.

@so0k (Contributor)

so0k commented Jan 7, 2017

Would it make sense to say:

it is preferred to include defaults in a packaged values.yaml over using the default function in the manifest templates?

It seems having default everywhere in a template makes it less readable, and a secure default values.yaml needs to be provided according to the technical requirements anyway.

(Plus, a lot of documentation is added to most stable charts' values.yaml, which gives a single place to look instead of spreading that information over the files in the templates folder.)

I've tried to take the comments in this thread, as well as Bitnami's documentation, into account in the conventions for my company: https://gist.github.com/so0k/f927a4b60003cedd101a0911757c605a
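The contrast so0k describes, sketched with a hypothetical `logLevel` value:

```yaml
# Preferred: the default lives (and is documented) in values.yaml
## Log verbosity; one of debug, info, warn, error
logLevel: info

# templates/configmap.yaml then stays simple:
log-level: {{ .Values.logLevel }}

# ...instead of scattering fallbacks through the templates:
log-level: {{ .Values.logLevel | default "info" }}
```

With the first style there is exactly one place to discover and change a default; with the second, the effective default is buried in whichever template happens to use the value.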

@viglesiasce (Contributor, Author)

Checksum your configs in an annotation on your deployments so that config changes can automatically update your pods:

annotations:
  checksum/config: {{ include (print $.Chart.Name "/templates/configmap/spinnaker-config.yaml") . | sha256sum }}

@so0k (Contributor)

so0k commented Mar 8, 2017

I hadn't seen it before - thanks for the link!

So we can PR on those documents to add sample templates for default labels?

@mgoodness (Contributor)

That's also news to me. Give us maintainers a chance to review and decide if we want to adopt those as the best practices for this repo. We'll also want to consider all the suggestions/requests made in this thread.

@ghost

ghost commented Mar 8, 2017

Regarding @mgoodness's first point, I invite everyone to helm/helm#2007 to review the current state of the guide. I believe we still have time for small changes to that PR before 2.3.

Then we should shape and move the suggestions here by discussing them in a new PR at kubernetes/helm.

Unsurprisingly, @so0k's gist mostly overlaps with the best practices guide, but it also includes additional suggestions that could be added as new sections. I believe it is a good starting point for our new PR.

@so0k, if you are willing to work on this, we can work together on that new PR, and everyone is welcome.

Our plan might be:

  • Extract differences between the guide and the gist (by talking in the comments of the gist)
    • For each difference, discuss rationale, benefits, and drawbacks
  • Fork and write up the results cleanly
  • Open a PR and reference it here to discuss with everyone
  • Scan this thread for other suggestions and reference them in the PR

If anyone (esp. @so0k) wants to join me (actively), I'd like to work on this together.

@ghost

ghost commented Mar 8, 2017

After the best practices are shaped more clearly, we can add suggestive comments and best practices to the default starter pack with another PR in the future.

IMHO, the default starter pack should closely reflect what is written in the best practices guide.

@so0k That is a better place to discuss including a template for the recommended labels I mentioned before.

@scottrigby (Member)

Hey everyone, where is the best place to address yaml indentation inconsistencies across the stable charts?

For example, is there a standard for YAML array indentation?

- One
- Two

vs

  - One
  - Two

Both are legit, but most docs I see across projects (e.g. Symfony) use the second. Does the charts repo have a standard?

I was talking with @prydonius about YAML linting; he noted https://yamllint.readthedocs.io/en/latest/rules.html#module-yamllint.rules.hyphens (personally, I am an advocate for indentation: {spaces: 2, indent-sequences: consistent}, but really, whatever the standard, as long as it's consistent). Perhaps we could add this as an automated PR check in the charts repo? In either case, it seems this would be a good helper command in helm.
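The yamllint rule in question could be pinned in a repo-level config, e.g. (a sketch using the values advocated above):

```yaml
# .yamllint
extends: default
rules:
  indentation:
    spaces: 2
    indent-sequences: consistent
```

With `indent-sequences: consistent`, either of the two array styles shown above passes, as long as each file picks one and sticks to it.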

I noticed this was raised in this comment (#44 (comment)) and also in helm/helm#1663. I sought this out in response to #920. I'll be happy to follow up wherever is best, if anyone has a good idea of where to go with this.

@bacongobbler (Member)

Should this be closed with the introduction of https://docs.helm.sh/chart_best_practices/#the-chart-best-practices-guide?

@scottrigby (Member)

@bacongobbler That makes sense. And we can continue to discuss these topics in separate issues/PRs against https://github.com/kubernetes/helm/tree/master/docs/chart_best_practices?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2018
@kfox1111 (Collaborator)

kfox1111 commented Feb 7, 2018

/remove-lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 7, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 8, 2018
@Starefossen (Contributor)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 10, 2018
@stale

stale bot commented Aug 19, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 19, 2018
@stale

stale bot commented Sep 2, 2018

This issue is being automatically closed due to inactivity.
