FakeClient update on status subresource not returning error despite conflicting resource version since v0.15.0 #2362
Comments
Agree, sounds like a bug.
/assign
conflicts: The fake client's subresource client is unable to correctly handle a resource version conflict when updating; it did not return a 409 status error. Closes kubernetes-sigs#2362. Signed-off-by: iiiceoo <[email protected]>
Hello, I encountered the exact same issue. While I'm waiting for a fix to be released, I rolled back my controller-runtime to 0.14.6 (with all k8s Go libs using 0.26.1) and it works for now.
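A sketch of that pinning in go.mod (the exact set of k8s.io modules depends on the project; 0.14.6 and 0.26.1 are the versions mentioned in the comment above):

```
require (
	k8s.io/api v0.26.1
	k8s.io/apimachinery v0.26.1
	k8s.io/client-go v0.26.1
	sigs.k8s.io/controller-runtime v0.14.6
)
```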
@sbueringer this did not work entirely for us. With the new changes we still had to explicitly add our CRDs as status subresources to be able to use status updates. Generating the client and adding a resource with client.Create() does not work as before.
Same issue here; using #2365 didn't fix it.
/kind bug

With the release of v0.16.0, did this not work? I see above that #2365 wasn't successful, but I wanted to know whether anything else in that release fixed this issue.
Re-doing this test like this:

```go
package fake_test

import (
	"context"
	"testing"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func Test_FakeStatusUpdate(t *testing.T) {
	// Seed the fake client with a Pod at resourceVersion "1" and register
	// the Pod's status subresource with the builder.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Namespace:       "test",
			Name:            "test",
			ResourceVersion: "1",
		},
	}
	c := fake.NewClientBuilder().
		WithRuntimeObjects(pod).WithStatusSubresource(pod).
		Build()
	// Update the status with a resourceVersion that no longer matches the
	// stored object and expect a conflict error.
	pod.Status.Phase = "testphase"
	pod.ResourceVersion = "2"
	err := c.Status().Update(context.Background(), pod)
	if err == nil {
		t.Fatal("Expected conflict error, but got nil")
	}
}
```
This now passes with the intended error, as before. The snippet was updated here. A subsequent Get of the pod will show that the pod was not updated. This test is running against controller-runtime v0.16.0 and go1.20. Does this help @nbam-e?
@troy0820 I think some of the other commenters here are referring to a different behavior change when using the fake client with CRDs. Since controller-runtime 0.15.0 you have to manually register the status subresource for CRD types with the builder's `WithStatusSubresource` option.
There is another thread around here describing the reason why this is now necessary: inferring the status subresource for resources that are not core types is said to be difficult. However, supplementing the builder on instantiation with `WithStatusSubresource` addresses this. The issue I have been observing is that if you create a resource (you typically don't put a status on it, just a spec) and then try to update the status of the object you just created (without having registered the subresource in the client builder), you'll get an error. I'm looking to see how to resolve that, but I fear it can't be done, because the fake client can't tell whether the object is supposed to have a subresource before you try to update the status.
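A minimal sketch of that scenario, using a Pod as a stand-in for a custom resource. The builder is deliberately not told about the status subresource, and the snippet only logs whatever the status update returns, since the exact error (if any) depends on the controller-runtime version in use:

```go
package fake_test

import (
	"context"
	"testing"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// Create an object in a fake client that was built WITHOUT
// WithStatusSubresource, then attempt a status update on it.
func TestCreateThenStatusUpdate(t *testing.T) {
	c := fake.NewClientBuilder().Build()

	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "test", Name: "test"}}
	if err := c.Create(context.Background(), pod); err != nil {
		t.Fatalf("create failed: %v", err)
	}

	// This is the call that surprises people when the type was not
	// registered with WithStatusSubresource; log whatever comes back
	// instead of asserting a particular error.
	pod.Status.Phase = corev1.PodRunning
	t.Logf("status update error: %v", c.Status().Update(context.Background(), pod))
}
```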
* Upgrade operator utils
* regen manifests
* fix test, see: kubernetes-sigs/controller-runtime#2362 (comment)

Signed-off-by: David J. M. Karlsen <[email protected]>
Co-authored-by: David J. M. Karlsen <[email protected]>
With the upgrade to controller-runtime v0.15 we need to supplement the fake builder with `WithStatusSubresource` on instantiation in order to be able to update the status of the objects we have created. In previous versions of controller-runtime this was not necessary because the semantics were ignored, but in v0.15 the fake client cannot infer the status subresource without its explicit registration. Ref.: kubernetes-sigs/controller-runtime#2362. Signed-off-by: Mat Kowalski <[email protected]>
I've come up with a hacky workaround that closes the gap for me.
Is there another workaround for this? My test is trying to check a status change on a reconcile, meaning I cannot provide … Is there a workaround, or another client that I can use? In older versions (0.13.x) this worked as expected; why did that flow change?
The original issue was fixed, and the latter discussion seems to revolve around the question of how to configure a CRD to have a status subresource in the fake client. The tl;dr on how to configure the fake client to provide a status subresource for a given resource is to do the following:
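A minimal sketch of that configuration (using a Pod so the snippet is self-contained; for a CRD, pass an instance of your own API type, registered in the builder's scheme, to `WithStatusSubresource`):

```go
package fake_test

import (
	"context"
	"testing"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestStatusSubresourceRegistered(t *testing.T) {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "test", Name: "test", ResourceVersion: "1"}}

	// Seed the tracker with the object AND tell the builder that this type
	// has a status subresource; since v0.15 the latter is not inferred.
	c := fake.NewClientBuilder().
		WithObjects(pod).
		WithStatusSubresource(pod).
		Build()

	// With the subresource registered (and a matching resourceVersion),
	// the status update goes through.
	pod.Status.Phase = corev1.PodRunning
	if err := c.Status().Update(context.Background(), pod); err != nil {
		t.Fatalf("status update failed: %v", err)
	}
}
```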
Whether this is needed cannot be automatically inferred; refer to #2386 (comment) for why. As the original issue was fixed, I am going to close this. Please limit discussion on this issue to the bug it was created for, which has since been fixed.
# Context: When updating the status of the applicationset object, the update can fail due to a conflict because the resourceVersion has changed as a result of a different update. This makes the reconcile fail, and we have to wait until the following reconcile loop for the relevant status fields to be updated, hoping the update calls don't fail again due to a conflict. It can even get stuck constantly because of these errors. A better approach is to retry on a conflict error with the newest version of the object, so we always update the latest version. This has been raised in issue argoproj#19535: failing due to conflicts can prevent the reconcile from proceeding.

# What does this PR do?
- Wraps all the `Update().Status` calls inside a retry function that retries when the update fails due to a conflict.
- Adds the appset to the fake client's subresources; otherwise the client cannot correctly determine the status subresource. Refer to kubernetes-sigs/controller-runtime#2386 and kubernetes-sigs/controller-runtime#2362.

Signed-off-by: Carlos Rejano <[email protected]>
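A generic sketch of that retry-on-conflict pattern with controller-runtime's client (using a Pod here rather than the ApplicationSet type from the PR):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// setPodPhase re-reads the object before every attempt, so a 409 Conflict
// from a concurrent update simply triggers another try with the latest
// resourceVersion instead of failing the reconcile.
func setPodPhase(ctx context.Context, c client.Client, key client.ObjectKey, phase corev1.PodPhase) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod := &corev1.Pod{}
		if err := c.Get(ctx, key, pod); err != nil {
			return err
		}
		pod.Status.Phase = phase
		return c.Status().Update(ctx, pod)
	})
}
```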
After upgrading to controller-runtime 0.15.0 one of our unit tests started failing.
The tested controller performs a `client.Status().Update(...)` on Pod resources. The now-failing unit test aims to test the controller behavior in case of a resource conflict (i.e. the fake client has a pod object, and the controller performs a status update on the same pod object but with a different resource version; expected behavior: conflict error, actual behavior since upgrading to 0.15.0: no error). This can be reproduced with the following test case (controller-runtime 0.15.0, k8s api 0.27.2):
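A sketch of such a test case, consistent with the (later updated) snippet quoted earlier in the thread:

```go
package fake_test

import (
	"context"
	"testing"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// Expected: a conflict error, because the resourceVersion sent with the
// status update does not match the stored object. On v0.15.0 the fake
// client returned no error, which is the bug reported here.
func TestStatusUpdateConflict(t *testing.T) {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "test", Name: "test", ResourceVersion: "1"}}
	c := fake.NewClientBuilder().WithObjects(pod).WithStatusSubresource(pod).Build()

	pod.Status.Phase = "testphase"
	pod.ResourceVersion = "2"
	if err := c.Status().Update(context.Background(), pod); err == nil {
		t.Fatal("expected conflict error, got nil")
	}
}
```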
I think it happens because the result of `SetResourceVersion` is immediately overwritten again by `fromMapStringAny`.
There appears to be a unit test for this case, but it doesn't fail because it uses `client.Update(...)` instead of `client.Status().Update(...)`: https://github.com/kubernetes-sigs/controller-runtime/blob/30eae58f1b984c1b8139dd9b9f68dd2d530ed429/pkg/client/fake/client_test.go#LL1441C2-L1441C2

Related PR: #2259