kubectl apply (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object #58477
@kubernetes/sig-api-machinery-bugs
@mengqiy likely due to strategic patch computation sending a "remove x" patch
The env var name is supposed to be the unique key of items in the list, yet the apiserver allowed a duplicate to be persisted in the first place. That's likely the cause of the bug.
It sounds like the validation is inconsistent with the schema's merge key. It should either not construct SMP with the env name as a key, or it shouldn't let you specify the same env var twice. @jennybuckley, would you like to look at the validation to see if it is doing the right thing? Is it intentional that people can put the same var in the list multiple times?
Otherwise, to prevent the inconsistency proactively, how about having a …
@yue9944882
This is most likely due to strategic merge patch not handling duplicated keys correctly, see #65106
Is there a workaround for this that doesn't involve deleting the deployment?
Use edit, patch, or replace instead of apply.
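For the patch route in particular, a JSON patch addresses list items by index rather than by merge key, so it can remove just one of the duplicates. A minimal sketch, assuming a hypothetical deployment named `your-deployment` whose first container carries the duplicate at env index 1:

```sh
# Remove only the second entry of the first container's env list by index.
# JSON patch operates on positions, so it avoids the ambiguous
# "delete by merge key" directive that strategic merge patch sends.
kubectl patch deployment your-deployment --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/1"}]'
```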
@kiyutink for me there was.
Same issue with 1.20.6.
I found the problem: after I removed the duplicate env vars, it works.
Just happened to me on both 1.25 and 1.26. I tried editing the deployment and deleting just the duplicated var without modifying the existing one, but same result: both got completely removed.
ORIGINAL:
EXPECTED:
ACTUAL:
Resolved after re-applying.
Same issue with 1.25.3.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
Lists in API objects can define a named property that should act as a "merge key". The value of that property is expected to be unique for each item in the list. However, gaps in API validation allow some types to be persisted with multiple items in the list sharing the same value for a mergeKey property.
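For reference, the merge key is declared in the published OpenAPI schema via vendor extensions. An abridged YAML rendering of the `env` field on `io.k8s.api.core.v1.Container` (trimmed to the relevant extensions):

```yaml
# The env list merges by the "name" field, so validation would be expected
# to reject two entries with the same name -- but it does not.
env:
  type: array
  items:
    $ref: '#/definitions/io.k8s.api.core.v1.EnvVar'
  x-kubernetes-patch-strategy: merge
  x-kubernetes-patch-merge-key: name
```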
The algorithm used by `kubectl apply` detects removals from a list based on the specified key, and communicates that removal to the server using a delete directive, specifying only the key. When duplicate items exist, that deletion directive is ambiguous, and the server implementation deletes all items with that key.
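As an illustration (a sketch, not captured from a live run), the strategic merge patch computed when the env var `x` is deleted from the manifest has roughly this shape; the `$patch: delete` directive carries only the merge key, so the server removes every entry named `x`:

```yaml
# Approximate shape of the patch kubectl apply sends; "app" is a
# hypothetical container name. The delete directive identifies the
# list item only by its merge key ("name"), which is ambiguous when
# duplicates exist.
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: x
          $patch: delete
```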
Known API types/fields which define a mergeKey but allow duplicate items to be persisted:

PodSpec (affects all workload objects containing a pod template):
- `hostAliases` (kubectl apply drops all hostAlias entries when removing a duplicate entry from an existing object #91670)
- `imagePullSecrets` (kubectl apply removes all imagePullSecrets when user attempts to remove duplicate secrets #91629)
- `containers[*].env` (this issue; Failed to create three way merge patch when container environment variable specified multiple times #86163; Deployment objects using optional envs with the same name are unexpectedly removed during updates - but exist out of the box #93266; Removing a duplicate environment variable from the container spec ends up deleting the environment var entirely #106809; Env variable is missing in container, after cleanup of duplications in deployment #121541; Upgrade StatefulSet, key missing in env #122121)
- `containers[*].ports` (Container Ports for pods vanishing #86273; Duplicate ports in deployment results in non-deterministic port assignment #93952; Deployment containerPort Duplicate value issue #113246)
- `volumes` (kubectl apply admits Deployment with duplicate volumes then fails when renaming duplicate #78266)
- `containers[*].volumeMounts` (Change merge key for VolumeMount to mountPath #35071 changed the merge key from name to mountPath, which was a breaking change, but mountPath is at least required to be unique)

Service:
- `ports` (name+protocol required to be unique on create in Validate if service has duplicate port #47336, but still has issues on update in apiserver allows duplicate service port #59119 and No error or warning message when service modified with partially success. #97883, and mergeKey is still only name; xref PATCH merges Services with same port, different protocol #47249)

Original report
===
What happened:

For a `deployment` resource: a container has a defined environment variable with name `x` that is duplicated (there are two env vars with the same name; the value is also the same). When you fix the `deployment` resource descriptor so that the environment variable with name `x` appears only once and push it with `kubectl apply`, a deployment with no environment variable named `x` is created, and therefore no environment variable named `x` is passed to the replica set and pods.

What you expected to happen:
After fixing the `deployment`, the environment variable with name `x` is defined in the `deployment` once.

How to reproduce it (as minimally and precisely as possible):
1. Create a `deployment` descriptor in which a container defines the environment variable `x` twice (a sketch follows this list) and `kubectl apply` it
2. Fix the descriptor so that `x` appears only once and `kubectl apply` it
3. `kubectl get deployment/your-deployment -o yaml` prints the deployment without the environment variable `x` at all
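A minimal descriptor along these lines reproduces step 1 (all names, labels, and values are illustrative, not taken from the original report):

```yaml
# Step 1: a deployment whose container lists the env var "x" twice.
# The apiserver accepts this even though "name" is the list's merge key.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: app
        image: nginx        # placeholder image
        env:
        - name: x
          value: "1"
        - name: x           # duplicate entry; deleting it in step 2
          value: "1"        # and re-applying removes both (the bug)
```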
Anything else we need to know?:
nope
Environment:
- Kubernetes version (use `kubectl version`):
  Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
- Kernel (e.g. `uname -a`): N/A