Patching of resource names #4512

Closed

cpressland opened this issue Mar 9, 2022 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/under-consideration

Comments

@cpressland

cpressland commented Mar 9, 2022

Is your feature request related to a problem? Please describe.

Linkerd implements a Custom Resource Definition called a "Linkerd Service Profile". Requests passing through the Linkerd proxy are matched to Kubernetes Services by these Service Profiles via the Service's FQDN.

If we create a Service called foo in the default namespace, then we need to write a Linkerd Service Profile with a name of foo.default.svc.cluster.local, as described in the Linkerd Service Profiles documentation.

Kustomize allows me to override both my Service and Linkerd Service Profile to deploy into a different namespace; in this example, we'll use the bar namespace. As such, I now have a Service with an FQDN of foo.bar.svc.cluster.local, but Kustomize still deploys my Linkerd Service Profile as foo.default.svc.cluster.local: it lands in the correct namespace, but with the wrong resource name.
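
For illustration, a minimal sketch of the two resources involved, assuming the linkerd.io/v1alpha2 API version and a trimmed-down spec (only the naming relationship matters here):

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: default
spec:
  ports:
    - port: 80
---
# linkerdserviceprofile.yaml -- the name must match the Service's FQDN
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: foo.default.svc.cluster.local
  namespace: default
spec:
  routes: []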

Describe the solution you'd like

The best idea I can come up with is that Kustomize could look up other values within the output YAML at compile time to figure out what an appropriate value might be.

If my resource name were foo.${.metadata.namespace}.svc.cluster.local, Kustomize could dynamically replace this value at build time based on whatever the value of .metadata.namespace is.

Alternatively, Kustomize could have a list of known variables that can be substituted, for example foo.${kustomize_namespace}.svc.cluster.local. By extension, the kustomization file itself could gain a variables section for setting any value dynamically.
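
Purely as a sketch of that second idea (this placeholder syntax does not exist in Kustomize today; ${kustomize_namespace} is hypothetical):

# linkerdserviceprofile.yaml -- hypothetical input, not valid Kustomize today
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # would render as foo.bar.svc.cluster.local once `namespace: bar` is applied
  name: foo.${kustomize_namespace}.svc.cluster.local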

Describe alternatives you've considered

As we're deploying with Flux 2, I have considered using its .spec.postBuild substitution feature, but this feels like bringing the solution into the wrong product. In a non-Flux deployment, I'd likely use a simple sed to perform the substitution.
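
For reference, a rough sketch of what the Flux 2 route could look like, assuming the kustomize.toolkit.fluxcd.io/v1beta2 Kustomization API; the names, path, and variable are placeholders:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: foo
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/foo
  prune: true
  sourceRef:
    kind: GitRepository
    name: foo
  postBuild:
    substitute:
      # replaces ${target_namespace} tokens in the built manifests
      target_namespace: bar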

Alternatively, we could use a JSON patch in our kustomization.yaml similar to the below:

---
resources:
  - service.yaml
  - linkerdserviceprofile.yaml

namespace: bar

patches:
  - target:
      kind: ServiceProfile
    patch: |-
      - op: replace
        path: /metadata/name
        value: foo.bar.svc.cluster.local

While this functionally works, it adds a lot of duplicate code to every deployment definition that targets a non-default namespace, and it negates the benefit of simply setting namespace: bar.

@cpressland cpressland added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 9, 2022
@k8s-ci-robot
Contributor

@cpressland: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 9, 2022
@KnVerey KnVerey self-assigned this Mar 16, 2022
@KnVerey
Contributor

KnVerey commented Mar 16, 2022

Kustomize intentionally does not support unstructured edits, including through variables; please see our Eschewed Features document.

One view of the underlying problem here could be that the namespace transformer should know that the namespace name is present in that particular location in your CR. That is currently possible to do via the transformers field, but only when the target is the entire field value: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#usage-via-transformers-field-3. @natasha41575 pointed out that Replacements have a delimiter option that would be sufficiently granular to support your example if we enabled it in FieldSpec more generally. If implemented, your config could look something like this:

transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer # this is the "advanced" version of the `namespace: bar` field
  metadata:
    namespace: bar
  fieldSpecStrategy: merge # depends on https://github.com/kubernetes-sigs/kustomize/pull/4461
  fieldSpecs:
  - path: metadata/name
    kind: ServiceProfile
    group: linkerd.io
    options:
      delimiter: '.'
      index: 1

What do you think of that potential solution?

The fieldSpecs could then also be provided via the Configurations field for the time being to enable sharing across multiple Kustomizations. Please note that we are reconsidering how that will work in the future due to its overlap with the functionality of the openapi and crds fields: see #3945 and #3944 (comment)
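
For illustration, the sharing setup might look roughly like the following sketch; the file name is invented, and the options block assumes the delimiter support discussed above is extended to FieldSpec, which has not been implemented:

# kustomization.yaml
namespace: bar
resources:
- service.yaml
- linkerdserviceprofile.yaml
configurations:
- linkerd-fieldspecs.yaml

# linkerd-fieldspecs.yaml (shared transformer configuration)
namespace:
- path: metadata/name
  kind: ServiceProfile
  group: linkerd.io
  options:            # hypothetical: requires delimiter/index support in FieldSpec
    delimiter: '.'
    index: 1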

/triage under-consideration
/kind feature

@KnVerey KnVerey removed their assignment Mar 16, 2022
@KnVerey KnVerey added triage/under-consideration and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 16, 2022
@jeacott1

jeacott1 commented Jun 3, 2022

I've been doing this with a kustomize var, in this case called JOB_VERSION:

metadata:
  name: mycustomisablename-$(JOB_VERSION)

This works great and is a solid use case for keeping vars, IMO. For me it's a great way to modify the name of a Job based on the version of the container it's running: it avoids having to delete old Jobs when running new versions and keeps the config DRY.
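
For context, a sketch of how such a var might be declared; the ConfigMap name and fieldPath are assumptions, and since metadata/name is not among the default var substitution targets, a varReference configuration is needed as well (note that vars are deprecated in favor of replacements):

# kustomization.yaml
resources:
- job.yaml
- configmap.yaml        # assumed to contain the job-config ConfigMap referenced below
vars:
- name: JOB_VERSION
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: job-config
  fieldref:
    fieldPath: data.version
configurations:
- varreference.yaml

# varreference.yaml -- allow $(JOB_VERSION) to be expanded in Job names
varReference:
- path: metadata/name
  kind: Job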

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Oct 31, 2022