Deploy K8s kustomizations Without Wrapping In A Helm Chart #1337
Comments
For this (as stated to @wirewc) we may want to make this apply to all manifests, since that may be easier technically and would provide more flexibility for standard manifests as well.
We will also have to think about how this affects the package lifecycle and the user messaging around it (since we use Helm right now to uninstall, for example, and this may break that).
This seems to indicate that Helm's parser (within the API, I assume) is getting tripped up by the raw content.
👋 John Snow here, but it looks like the problem is that the values are resulting in Helm getting content it cannot template. You may need to add escape sequences.
Some context to the above comment: Kustomize and Helm both make use of the default {{ }} delimiters from Go's text/template package. I don't know enough about the order of operations involved here, but I wonder precisely what ordering is occurring to result in this behavior. The report indicates that additionalPrometheusRulesMap was supplied, likely with the $labels expression embedded in it. My immediate question would be what happens when the chart is rendered with Helm directly.
🤔 Taking that into account ... yeah, I can see why Helm barked at it. The raw content is expected in the final YAML placed into k8s, and is then consumed by Flux's HelmRelease object.
So it does look like Kustomize does get run before being packaged by Helm, at least. I'm still shocked nothing barked until it got to Helm. Regardless, the next step is getting an isolated test case with @UncleGedd so we can iterate quickly over changes as we poke through. Because of how Zarf works, every deployment has to go through Helm.
It looks like the Kustomize <-> Helm part is working well. @UncleGedd and I will be touching base to dive into the issue further and find the root cause. It looks like the issue is just that the end of the deployment is failing, and it might be related to something downstream of Helm.
Wrote a failing e2e test; now trying to figure out how to have Helm not care about that perceived template variable.
Ok, so it turns out that Prometheus templating in Helm is a relatively common problem; check out:
I was able to replicate the issue with a barebones Helm chart with a ConfigMap containing a single Prometheus rule:
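The exact chart isn't preserved in this thread, but a minimal sketch along these lines (chart layout, names, and the rule content are illustrative, not the original) is enough to reproduce the behavior:

```yaml
# templates/configmap.yaml in a barebones chart (illustrative reconstruction)
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rule-example
data:
  rule.yaml: |
    groups:
      - name: example.rules
        rules:
          - alert: InstanceDown
            expr: up == 0
            annotations:
              # This is Prometheus alert templating, but Helm's renderer
              # sees $labels as an undefined chart variable and errors out.
              summary: "Instance {{ $labels.instance }} is down"
```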
Helm throws that same error:
Reading those two links above, the recommended solution is to just change the templating on the chart's side. Although it's super curious that the Flux HelmRelease is totally fine with that same content.
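For reference, "changing the templating on the chart's side" usually means escaping the Prometheus expression so Helm emits it literally. A minimal sketch, applied to the illustrative rule above rather than the reporter's actual chart:

```yaml
# Wrapping the Prometheus expression in a quoted Go template string makes
# Helm output the inner text verbatim instead of trying to evaluate it.
annotations:
  summary: 'Instance {{ "{{ $labels.instance }}" }} is down'
```

A backtick raw string or a printf-based escape achieves the same thing; all of these boil down to making Helm treat the Prometheus markers as plain text.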
While I do not see this as a Zarf issue, I'm more curious about how often we'll see this issue in the future. If the occurrence seems likely, then we should make an attempt to address it within Zarf, or at the very least document how to resolve the problem.
Adding some more context to this conversation. I'm deploying a Big Bang cluster using Zarf in a connected env (using YOLO mode). Many of the use cases that Zarf was built for, we aren't using. What I'm interested in is using Zarf as a way to do declarative deployments (including declarative upgrades). The problem my team and I face is that many of our platform upgrade steps are done either via a bash script or by manually interacting with the cluster. I'd like to use Zarf to declaratively define what upgrades look like.
100% agree @Racer159. For my use case, I'm fine with breaking the package lifecycle behavior mentioned above.
After a brief sync meeting, it looks like Zarf Actions would be a great place to address this specific deployment issue. Relevant context:
An additional example of Zarf Actions outside of the examples folder can be found in the IaC repository.
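For readers who haven't used Zarf Actions, a minimal sketch of how they could cover this deployment step is shown below; the package name, paths, and commands are assumptions for illustration, not taken from the IaC repository or this issue:

```yaml
kind: ZarfPackageConfig
metadata:
  name: kustomize-via-actions
components:
  - name: apply-kustomization
    required: true
    actions:
      onDeploy:
        after:
          # Applies the kustomization directly, sidestepping the generated
          # Helm chart. Assumes kustomize and kubectl are available where
          # `zarf package deploy` runs.
          - cmd: kustomize build ./kustomizations/bigbang | kubectl apply -f -
```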
## Description

Wrap manifest/kustomizations in string literals within the helm chart generation process if needed to avoid nested go template messes. Supersedes #1347.

## Related Issue

Fixes #1337

## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Other (security config, docs update, etc)

## Checklist before merging

- [x] Test, docs, adr added or updated as needed
- [x] [Contributor Guide Steps](https://github.com/defenseunicorns/zarf/blob/main/CONTRIBUTING.md#developer-workflow) followed
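A rough sketch of the string-literal idea described above; this shows the general Go template technique, not the exact code the PR adds. The wrapped manifest content is the illustrative ConfigMap from earlier in the thread:

```yaml
# A file in the Zarf-generated wrapper chart (sketch). The backtick raw
# string makes Helm emit the manifest verbatim, so Prometheus markers such
# as $labels are never parsed as chart templating.
{{ `
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rule-example
data:
  rule.yaml: |
    groups:
      - name: example.rules
        rules:
          - alert: InstanceDown
            expr: up == 0
            annotations:
              summary: "Instance {{ $labels.instance }} is down"
` }}
```

Note that a Go raw string cannot itself contain backticks, so this kind of wrapping only works for content without them.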
Is your feature request related to a problem? Please describe.
I'm trying to deploy a Big Bang variant using a kustomization inside of a Zarf package. The Helm chart that Zarf wraps my kustomization with fails to install due to a templating error. Note that the kustomization builds correctly with `kustomize build <dir>`.

Example of my Zarf pkg:
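The original package definition isn't preserved here; a minimal zarf.yaml of this shape (names and paths are placeholders) shows what such a package looks like:

```yaml
kind: ZarfPackageConfig
metadata:
  name: big-bang-variant
  # Connected-environment deployment, as mentioned in the comments above.
  yolo: true
components:
  - name: bigbang
    required: true
    manifests:
      - name: bigbang
        # Zarf builds this kustomization at package create time and wraps
        # the rendered output in a generated Helm chart for deployment.
        kustomizations:
          - kustomizations/bigbang
```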
We have determined that the Helm chart fails to install due to a templating error related to the following values for the Big Bang Monitoring chart:
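The exact values aren't reproduced in this thread, but based on the discussion they are kube-prometheus-stack style additionalPrometheusRulesMap entries whose annotations use Prometheus's own $labels templating; a placeholder sketch:

```yaml
additionalPrometheusRulesMap:
  custom-rules:
    groups:
      - name: example.rules
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 5m
            annotations:
              # Prometheus alert templating; once this lands inside the
              # Zarf-generated Helm chart, Helm tries to parse it itself.
              summary: "Instance {{ $labels.instance }} has been down for 5 minutes"
```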
Zarf fails with the error message:
That `$labels` variable is inside of a ConfigMap that is passed into a Flux HelmRelease. I believe when we wrap this in a Helm chart, Helm treats it as a local Helm variable, resulting in the above error.

Describe the solution you'd like
I'd like the option to install the kustomization directly (like with `kustomize build <dir> | kubectl apply -f -`) instead of wrapping it in a Helm chart. Specifically, I want to be able to specify in the Zarf package that the kustomization should not be wrapped in a Helm chart.