kustomize fails to handle two JSONPatch that add an item to the end of an array #642
Seems to be related to #638, maybe two different approaches to the same end goal. I did not use the strategic merge patch because these tasks are independent of a named resource.
When multiple patches are applied to one object, Kustomize performs a check to make sure there are no conflicts among the patches, say one patch changing the replicas to 2 and another changing the replicas to 3. This detection is done by applying the patches in different orders and checking whether the result is still the same object. In the example here, the two patches both append an item to an array, say one appends item2 and the other appends item3. When kustomize checks for conflicts, it sees two different objects, one for each application order.
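A minimal sketch of that divergence, assuming an existing single-item list and illustrative item names (the original example is not preserved in this thread):

```yaml
# result when the item2 patch is applied before the item3 patch
someList:
- item1
- item2
- item3

# result when the item3 patch is applied before the item2 patch
someList:
- item1
- item3
- item2
```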
We can see that the two arrays contain the same items, so they should be treated as the same object. The fix is to allow array items to be in different orders when comparing two objects: https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/multitransformer.go#L73. There is also a workaround: use one patch to append multiple items.
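A sketch of that single-patch workaround (the file name and item names are illustrative): because both appends live in one JSON 6902 patch, there is no second patch for the conflict check to reorder.

```yaml
# patch-append-items.yaml - one patch file appending two items
- op: add
  path: /someList/-
  value: item2
- op: add
  path: /someList/-
  value: item3
```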
One workaround is to have two patches apply changes to different ends of the array:
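For example, roughly like this (a sketch with illustrative paths and values): one patch inserts at index 0 and the other appends with `-`, so both application orders yield the same array and the conflict check passes.

```yaml
# patch-prepend.yaml - add to the beginning of the array
- op: add
  path: /someList/0
  value: item2
```

```yaml
# patch-append.yaml - add to the end of the array
- op: add
  path: /someList/-
  value: item3
```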
Of course this only works if you have two patches. If you need more patches, you are still stuck.
Why not instead define an order in which patches will be applied? This would remove all ambiguity, and would be important when ordering of lists does matter (which should be pretty much always, although that is unfortunately often not true in the Kubernetes world).
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This should not be marked stale, I suspect.
/remove-lifecycle stale
I thought that the feature would be neat for development (with replication factor 1, see #222), but it causes just as much confusion and useless troubleshooting there, for example race conditions between intentional topic creation and a container starting up to produce to the topic. You actually never know which topic config you're getting. Related: #107. The duplication is a workaround for kubernetes-sigs/kustomize#642.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Please keep this alive.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This is still not stale.
@fejta can lifecycle be disabled for this ticket?
/remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
/reopen
@balopat: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@Liujingfang1 thanks for the explanation.
@booninite Ever find a solution to this?
I have a series of array elements (in this case they are https://github.com/argoproj/argo task templates, which can be used independently or chained together) that I would like to keep in individual patches to maximize their reuse.
Each of these patches looks something like this:
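Roughly like this (a hedged reconstruction; the template content below is a placeholder, not the original): a single JSON 6902 add operation that appends one Argo task template.

```yaml
# add-example-task.yaml - appends one reusable template to the Workflow spec
- op: add
  path: /spec/templates/-
  value:
    name: example-task            # hypothetical template name
    container:
      image: alpine:3.18          # placeholder image
      command: ["echo", "running example task"]
```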
Based on the JSONPatch spec, a value added at `/spec/templates/-` should be appended to the array of templates that already exists in the referenced Workflow resource. The issue comes when I try to apply two patches like the one above to the same array of templates. Intuitively, I would expect `/spec/templates/-` to resolve into successive additions to the end of `/spec/templates`, one for each JSONPatch I add to my `kustomization.yaml` file. Currently a collision occurs in this scenario.
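For reference, the kind of `kustomization.yaml` that hits the collision looks roughly like this (a sketch; the resource and file names are hypothetical): two JSON 6902 patches target the same Workflow and both append to `/spec/templates/-`.

```yaml
resources:
- workflow.yaml

patchesJson6902:
- target:
    group: argoproj.io
    version: v1alpha1
    kind: Workflow
    name: my-workflow            # hypothetical Workflow name
  path: add-task-a.yaml
- target:
    group: argoproj.io
    version: v1alpha1
    kind: Workflow
    name: my-workflow
  path: add-task-b.yaml
```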
Are there any other suggested usage patterns for reuse at this level?