Patching of resource names #4512
Comments
@cpressland: This issue is currently awaiting triage. SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Kustomize intentionally does not support unstructured edits, including through variables; please see our Eschewed Features document. One view of the underlying problem here is that the namespace transformer should know that the namespace name is present in that particular location in your CR. That is currently possible via the transformers field, but only when the target is the entire field value: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#usage-via-transformers-field-3. @natasha41575 pointed out that Replacements have a delimiter option that would be sufficiently granular to support your example if we enabled it in FieldSpec more generally. If implemented, your config could look something like this:

```yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer # this is the "advanced" version of the `namespace: bar` field
  metadata:
    namespace: bar
  fieldSpecStrategy: merge # depends on https://github.com/kubernetes-sigs/kustomize/pull/4461
  fieldSpecs:
  - path: metadata/name
    kind: ServiceProfile
    group: linkerd.io
    options:
      delimiter: '.'
      index: 1
```

What do you think of that potential solution? The fieldSpecs could then also be provided via the Configurations field for the time being, to enable sharing across multiple Kustomizations. Please note that we are reconsidering how the Configurations field will work in the future due to overlapping functionality.

/triage under-consideration
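For comparison, the delimiter/index options mentioned above already exist on the `replacements` field today. A sketch of how they could rewrite the namespace segment of a ServiceProfile name, under the assumption of the `foo`/`bar` names used in this issue (the source selector here is illustrative, not taken from the thread):

```yaml
# kustomization.yaml (sketch): use today's `replacements` to swap the
# namespace segment of a dot-delimited ServiceProfile name.
replacements:
- source:
    kind: Service
    name: foo                # assumed: the Service whose namespace we copy
    fieldPath: metadata.namespace
  targets:
  - select:
      group: linkerd.io
      kind: ServiceProfile
    fieldPaths:
    - metadata.name
    options:
      delimiter: '.'
      index: 1               # second dot-separated segment: foo.<ns>.svc.cluster.local
```

The limitation the maintainer describes is that this granularity is available in `replacements` but not yet in the `fieldSpecs` used by built-in transformers such as NamespaceTransformer.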
I've been doing this with a kustomize var, in this case called JOB_VERSION, substituted into the Job's metadata. Works great and is a solid use case for keeping vars, IMO. For me it's a great way to modify the name of a Job based on the version of the container it's running; it avoids the problem of having to delete old Jobs when running new versions, and maintains DRY config.
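The vars approach this commenter describes works roughly as follows. This is a hypothetical sketch (the commenter's actual kustomization is not shown; the ConfigMap name and field are assumptions), and note that substituting into `metadata/name` requires extending the default var reference paths via `configurations`:

```yaml
# kustomization.yaml (hypothetical sketch of the vars approach)
vars:
- name: JOB_VERSION
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: build-info          # assumed object carrying the version value
  fieldref:
    fieldpath: data.version
configurations:
- varreference.yaml           # must add `path: metadata/name` under varReference,
                              # since vars do not substitute into names by default
```

A Job manifest could then use `name: my-job-$(JOB_VERSION)`. Note that vars are deprecated in current kustomize releases in favor of replacements, which is the tension behind this comment.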
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Is your feature request related to a problem? Please describe.
Linkerd implements a Custom Resource Definition called "Linkerd Service Profiles"; requests passed through the Linkerd proxy are matched to Kubernetes Services via the Service's FQDN using these Service Profiles. If we create a Service in the `default` namespace called `foo`, then we'd need to write a Linkerd Service Profile with a name of `foo.default.svc.cluster.local` (documentation here). Kustomize allows me to override both my Service and my Linkerd Service Profile to deploy into a different namespace; in this example, we'll use the `bar` namespace. As such, I now have a Service with an FQDN of `foo.bar.svc.cluster.local`, but Kustomize still deploys my Linkerd Service Profile as `foo.default.svc.cluster.local`; while it is in the correct namespace, it has the wrong resource name.

Describe the solution you'd like
The best idea I can come up with is that Kustomize could look up other values within the output YAML at compile time to figure out what an appropriate value might be.
If my resource name was `foo.${.metadata.namespace}.svc.cluster.local`, Kustomize could dynamically replace this value during deployment based on whatever the value of `.metadata.namespace` is. Alternatively, Kustomize could have a list of known variables that could be substituted, for example `foo.${kustomize_namespace}.svc.cluster.local`. By extension, the kustomization file itself could gain a variables section for setting any value dynamically.

Describe alternatives you've considered
As we're deploying with Flux 2, I have considered using its `.spec.postBuild` substitution features, but this feels like it's bringing the solution into the wrong product. In a non-Flux 2 deployment, I'd likely use a simple `sed` to perform the substitution. Alternatively, we could use a JSON patch in our `kustomization.yaml` similar to the below. While this functionally works, it's a lot of duplicate code to add to every definition of a deployment that goes to a non-default namespace, and it negates the benefit of just setting `namespace: bar`.
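A hypothetical reconstruction of such a JSON patch, assuming the `foo`/`bar` names from the problem description (the API version is an assumption based on Linkerd's ServiceProfile CRD):

```yaml
# kustomization.yaml (hypothetical sketch of the JSON-patch alternative)
namespace: bar
patches:
- target:
    group: linkerd.io
    version: v1alpha2        # assumed ServiceProfile API version
    kind: ServiceProfile
    name: foo.default.svc.cluster.local
  patch: |-
    - op: replace
      path: /metadata/name
      value: foo.bar.svc.cluster.local
```

The duplication complaint follows directly: the target namespace appears both in `namespace: bar` and hard-coded inside the patch value, and the patch must be repeated for every ServiceProfile in every overlay.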