
Kustomize does not merge images from resources #5041

Closed
MaurGi opened this issue Feb 8, 2023 · 8 comments

Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/duplicate Indicates an issue is a duplicate of other open issue.

Comments

@MaurGi

MaurGi commented Feb 8, 2023

What happened?

In the kustomization.yaml files, the images transformer defined in the bases folder wins over the images: defined in the overlay under the resources folder.

Why is that? It is the opposite of what happens to other fields under a strategic merge. Can we force the overlay to win with a patchesStrategicMerge?

To reproduce, run:
kustomize build resources/dev
with the files below.

What did you expect to happen?

Since images is defined in resources, it should win over the images in the base.
How can you override that otherwise?

If I try to override the image directly in the resources kustomization with, I get the following error:

Error: wrong Node Kind for  expected: SequenceNode was MappingNode: value: {image: nginx:latest}

I assume the base kustomization is applied to the image first, so the overlay's images: entry no longer finds nginx:latest to replace.

This makes it impossible to have a default image and then override it in an overlay; do I have to create a separate resources folder for my base case?

Thanks.

How can we reproduce it (as minimally and precisely as possible)?

# bases/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml

images:
- name: nginx:latest
  newName: PRODREGISTRY/acs/nginx
  newTag: PRODVERSION
# bases/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
# resources/dev/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../bases

images:
- name: nginx:latest
  newName: DEVREGISTRY/acs/nginx
  newTag: DEVVERSION

Expected output

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: DEVREGISTRY/acs/nginx:DEVVERSION
        name: nginx
        ports:
        - containerPort: 80

Actual output

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: PRODREGISTRY/acs/nginx:PRODVERSION
        name: nginx
        ports:
        - containerPort: 80

Kustomize version

{Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:12:45Z GoOs:linux GoArch:amd64}

Operating system

Linux

@MaurGi MaurGi added the kind/bug Categorizes issue or PR as related to a bug. label Feb 8, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Feb 8, 2023
@annasong20
Contributor

Hi @MaurGi,

I assume the base kustomization is applied to the image first, so the overlay's images: entry no longer finds nginx:latest to replace.

Yes, you are correct! Please see #4581 (comment) for further explanation.

Why is that? It is the opposite of what happens to other fields under a strategic merge. Can we force the overlay to win with a patchesStrategicMerge?

All kustomizations, including patches and images, are applied sequentially: the base is applied before the overlay. I believe the difference you see with patches is that the fields a patch uses to identify its target resource (say, kind) were not changed by the base kustomization.yaml. A patch can therefore be applied in both your base and your overlay (resources/dev), and the overlay "wins" because it is applied later. The images transformer, by contrast, looks for the target image by the name field value you provide, and because the base already changed the image name and tag, kustomize cannot find the image to operate on.
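
To make the ordering concrete: building the base on its own already rewrites the image, so by the time the overlay's images entry runs there is no nginx:latest left to match. A sketch of the intermediate state the overlay sees (the relevant fragment of `kustomize build bases`, using the files from the reproduction above):

# fragment of `kustomize build bases` (what the overlay's images transformer operates on)
spec:
  template:
    spec:
      containers:
      - image: PRODREGISTRY/acs/nginx:PRODVERSION # already renamed by the base; `name: nginx:latest` no longer matches
        name: nginx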

@annasong20
Contributor

Since images is defined in resources, it should win over the images in the base. How can you override that otherwise?

Canonically, you'd override the base image by specifying the following in the overlay:

images:
- name: PRODREGISTRY/acs/nginx:PRODVERSION
  newName: DEVREGISTRY/acs/nginx
  newTag: DEVVERSION
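
Applied to the reproduction above, the overlay file would then look like this (a sketch; the only change from the original resources/dev/kustomization.yaml is the name field, which now has to reference the image as the base build leaves it):

# resources/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../bases

images:
- name: PRODREGISTRY/acs/nginx:PRODVERSION
  newName: DEVREGISTRY/acs/nginx
  newTag: DEVVERSION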

If I try to override the image directly in the resources kustomization with, I get the following error:

Error: wrong Node Kind for  expected: SequenceNode was MappingNode: value: {image: nginx:latest}

I might be able to help if you clarify what you mean by "override the image directly in the resources kustomization with".
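
For what it's worth, that error message generally means a field kustomize expects to be a list (a SequenceNode) was given a single mapping (a MappingNode) instead. One guess at a snippet that would fail this way, purely an assumption since the original snippet wasn't shared:

# hypothetical fragment (assumption about what was written): a mapping where a list is expected
images:
  image: nginx:latest   # rejected: images must be a list of entries

# list form that the images transformer accepts
images:
- name: nginx
  newName: DEVREGISTRY/acs/nginx
  newTag: DEVVERSION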

This makes it impossible to have a default image and then override it in an overlay; do I have to create a separate resources folder for my base case?

Could you also clarify what you mean by "create a separate resources folder for my base case"? You can override the image if you know the path to it. For example, if you only want to change spec/template/spec/containers/image in your Deployment, you can use patches, replacements, and other kustomize functionality.
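
As a concrete example of the patch route, here is a sketch against the Deployment from the reproduction (it assumes spec/template/spec/containers/0/image is the only field you want to change):

# resources/dev/kustomization.yaml fragment (sketch): override the image by path, ignoring its current value
patches:
- target:
    kind: Deployment
    name: nginx-deployment
  patch: |
    - op: replace
      path: /spec/template/spec/containers/0/image
      value: DEVREGISTRY/acs/nginx:DEVVERSION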

@annasong20
Contributor

/kind support
/triage duplicate

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. triage/duplicate Indicates an issue is a duplicate of other open issue. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 4, 2023
@annasong20
Contributor

annasong20 commented Mar 17, 2023

Hi @MaurGi, I've spent some more time thinking and consulted @KnVerey about workarounds for your use case. I'd like to propose the following, in addition to my earlier comments.

If you know the field paths that reference your image, you can either:

  • if this is a one-time operation for a very small number of field paths, say 1, use patches to change the value at the image field path without knowing its current value (which the base's images entry changed), or
  • use a trick: define a custom resource whose name is the image, direct your image changes at the name of that custom resource, use the nameReference transformer to propagate the name change to every field path that references the image, and finally delete the custom resource so that your output only contains the original resources you care about. Note that you will need one custom resource per distinct target image. The configurations below accomplish this; a sketch of the expected build output follows the configuration block.
    base
    - kustomization.yaml
    - deployment.yaml
    - custom-resource.yaml
    - name-reference.yaml # need this file in every kustomization that you potentially call `build` on
    overlay
    - kustomization.yaml
    - name-reference.yaml # optional since present in base
    
    # base/kustomization.yaml
    resources:
    - custom.yaml # custom resource representing image
    - deployment.yaml
    configurations:
    - name-reference.yaml # configure name reference transformer to change field path values that reference image once custom resource name (image) changes
    patches:
    - target:
        kind: CustomResource # resource in custom.yaml
        name: nginx:latest # useful if targeting multiple images
      patch: |
        - op: replace
          path: /metadata/name
          value: PRODREGISTRY/acs/nginx:PRODVERSION
    # base/custom.yaml
    apiVersion: custom.group/v1
    kind: CustomResource
    metadata:
      annotations:
        config.kubernetes.io/local-config: "true" # to remove this resource in output
      name: nginx:latest
    # base/name-reference.yaml
    nameReference:
    - kind: CustomResource
      fieldSpecs:
      - kind: Deployment
        path: spec/template/spec/containers[]/image
    # overlay/kustomization.yaml
    resources:
    - ../base
    patches:
    - target:
        kind: CustomResource
        name: nginx:latest # works because patches recognizes old names
      patch: |
        - op: replace
          path: /metadata/name
          value: DEVREGISTRY/acs/nginx:DEVVERSION
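
If the nameReference propagation behaves as configured above (an assumption I have not verified against this exact setup), `kustomize build overlay` should emit the Deployment with the dev image and drop the custom resource, roughly:

# expected fragment of `kustomize build overlay` (sketch)
spec:
  template:
    spec:
      containers:
      - image: DEVREGISTRY/acs/nginx:DEVVERSION
        name: nginx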

If you don't know the field paths that reference your image, you can either

The ability for images to recognize old names would be a new feature. If these workarounds still don't suffice, feel free to continue to follow up.

/remove-kind bug

@k8s-ci-robot k8s-ci-robot removed the kind/bug Categorizes issue or PR as related to a bug. label Mar 17, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
