
kustomize fails to handle two JSONPatch that add an item to the end of an array #642

Closed
shimmerjs opened this issue Dec 18, 2018 · 21 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@shimmerjs

shimmerjs commented Dec 18, 2018

I have a series of array elements (in this case they are https://github.com/argoproj/argo task templates, which can be used independently or chained together) that I would like to keep in individual patches to maximize their reuse.

Each of these patches looks something like this:

- op: add
  path: /spec/templates/-
  value:
    name: clone-repo
    outputs:
      artifacts:
      - name: source
        path: /path
    container:
      image: alpine/git
      command: [sh, -c]
      args: ["git clone --branch {{workflow.parameters.branch}} --depth 1 https://$GITHUB_TOKEN@repo /path"]
      env:
      - name: GITHUB_TOKEN
        valueFrom:
          secretKeyRef:
            name: github-read-only
            key: token

Based on the JSONPatch spec:

A.16.  Adding an Array Value

   An example target JSON document:

   { "foo": ["bar"] }

   A JSON Patch document:

   [
     { "op": "add", "path": "/foo/-", "value": ["abc", "def"] }
   ]

   The resulting JSON document:

   { "foo": ["bar", ["abc", "def"]] }

/spec/templates/- should be added to the array of templates that already exists in a referenced Workflow resource. The issue comes when I try to apply two patches like the one above to the same array of templates. Intuitively I would expect /spec/templates/- to resolve into successive additions to the end of /spec/templates, one for each JSONPatch I add to my kustomization.yaml file.

Currently a collision occurs in this scenario:

Error: found conflict between different patches
&resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc0001041b8), b:0} doesn't deep equal &resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc000104308), b:0}
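For reference, a kustomization.yaml roughly like the following reproduces this; the patch file names and the Workflow name are illustrative, and both patch files append to /spec/templates/- of the same resource:

# kustomization.yaml (illustrative reproduction)
resources:
- workflow.yaml

patchesJson6902:
- target:
    group: argoproj.io
    version: v1alpha1
    kind: Workflow
    name: my-workflow
  path: patch-clone-repo.yaml
- target:
    group: argoproj.io
    version: v1alpha1
    kind: Workflow
    name: my-workflow
  path: patch-run-tests.yaml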

Are there any other suggested usage patterns for reuse at this level?

@shimmerjs
Author

This seems to be related to #638; maybe they are two different approaches to the same end goal. I did not use a strategic merge patch because these tasks are independent of a named resource.

@Liujingfang1
Contributor

When multiple patches are applied to one object, Kustomize does some extra work to make sure there are no conflicts among the patches, for example one patch changing the replicas to 2 and another changing the replicas to 3. This detection is done by applying the patches in different orders and checking whether we still get the same object.

In the example here, both patches append an item to an array, say one appends item2 and the other item3. When kustomize checks for conflicts, it sees two different objects:

# one object
- whatever item was originally there
- item2
- item3

# the other object
- whatever item was originally there
- item3
- item2

We can see that the two arrays contain the same items, so they should be treated as the same object.

The fix is to allow array items to be in different orders when comparing two objects. https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/multitransformer.go#L73

There is also a workaround: use one patch to append multiple items.
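For example, a single patch file roughly like this (template contents are illustrative) appends both items, and since there is only one patch there is nothing for kustomize to compare for conflicts:

# combined-patch.yaml (illustrative)
- op: add
  path: /spec/templates/-
  value:
    name: clone-repo
    container:
      image: alpine/git
- op: add
  path: /spec/templates/-
  value:
    name: run-tests
    container:
      image: golang:1.13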

@Liujingfang1 Liujingfang1 added kind/bug Categorizes issue or PR as related to a bug. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Mar 18, 2019
@wichert

wichert commented Apr 8, 2019

One workaround is to have two patches apply changes to different ends of the array:

  • patch A adds to /spec/templates/0
  • patch B adds to /spec/templates/-

Of course this only works if you have two patches. If you need more patches, you are still stuck.
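In patch form, that workaround looks roughly like this (template contents are illustrative); applying the two patches in either order yields the same list, so the conflict check passes:

# patch-a.yaml: inserts at the start of the list (illustrative)
- op: add
  path: /spec/templates/0
  value:
    name: clone-repo
    container:
      image: alpine/git

# patch-b.yaml: appends to the end of the list (illustrative)
- op: add
  path: /spec/templates/-
  value:
    name: run-tests
    container:
      image: golang:1.13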

@Benjamintf1
Contributor

Why not instead define an order in which the patches are applied? This would remove all ambiguity, and it matters whenever list ordering is significant (which should be pretty much always, although that is unfortunately often not true in the Kubernetes world).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 26, 2019
@wichert

wichert commented Sep 26, 2019

This should not be marked stale, I suspect.

@jcassee
Contributor

jcassee commented Sep 26, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 26, 2019
solsson added a commit to Yolean/kubernetes-kafka that referenced this issue Oct 8, 2019
I thought that the feature would be neat for development (with replication factor 1, see #222),
but it causes just as much confusion and useless troubleshooting there,
for example race conditions between intentional topic creation and a container
starting up to produce to the topic. You actually never know which topic config you're getting.

Related: #107

The duplication is a workaround for kubernetes-sigs/kustomize#642
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 25, 2019
@wichert

wichert commented Dec 25, 2019

Please keep this alive

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 24, 2020
@wichert

wichert commented Jan 27, 2020

This is still not stale.

@wichert

wichert commented Jan 27, 2020

@fejta can lifecycle be disabled for this ticket?

@aaroniscode

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 28, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 27, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@balopat

balopat commented Jun 27, 2020

/reopen

@k8s-ci-robot
Contributor

@balopat: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afirth
Contributor

afirth commented Sep 9, 2020

@Liujingfang1 thanks for the explanation.
This bit me several times while trying to add arguments to a podSpec from upstream.
I know it's stale, but AFAICT it's not fixed. Reopen?

@garreeoke

@booninite Ever find a solution to this?
