feat(core): Upgrade API kubernetes 1.28 and controller-runtime to 0.16 #5321
Conversation
options := cache.Options{
	ByObject: selectors,
We need to include those selectors somewhere for caching reasons. It seems they are no longer applied anywhere, or am I missing something?
You are right, something got lost here. I will add it again.
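For reference, a minimal sketch (not the PR's actual code; the `selectors` map and the `managerOptions` helper are assumptions) of how per-type label selectors could be wired back into the manager's cache options under controller-runtime 0.16, where they moved to `cache.Options.ByObject`:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// managerOptions is a hypothetical helper: it restricts the cache to
// objects matching the given label selector, per resource type, using
// the ByObject field introduced in controller-runtime 0.16.
func managerOptions(selector labels.Selector) ctrl.Options {
	selectors := map[client.Object]cache.ByObject{
		&corev1.Pod{}: {Label: selector},
	}
	return ctrl.Options{
		Cache: cache.Options{
			ByObject: selectors,
		},
	}
}
```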
Is this expected to be merged for Camel K 2.3?
Force-pushed from 5823203 to f065ed9.
According to the kubernetes-version-compatibility policy, it should still be compatible with Kubernetes API version N-3 or N-4.
For now this is for main, so post 2.3. Before backporting I need to run the tests on an API 1.27 cluster and OpenShift 4.14.
The tests need to run on an API 1.27 cluster and OpenShift 4.14 because other dependency upgrades that come with this, like the one on client-go, could have an impact somewhere, and the compatibility is not 100% (see https://github.com/kubernetes/client-go#compatibility-client-go---kubernetes-clusters). This could be a good opportunity to see if I can add a GitHub Actions workflow to allow on-demand runs on specific kindest/node images. After this one is fully done, there should be the upgrade to API 1.29 and controller-runtime 0.17.
I don't think we should backport this to 2.3 as it may pose some compatibility problems with the existing running platform. I.e., if the user is running a 2.3.0 version, the upgrade to 2.3.1 should work on the very same platform they're running 2.3.0 on.
@@ -201,22 +202,27 @@ func Run(healthPort, monitoringPort int32, leaderElection bool, leaderElectionID
		}
	}

	defaultNamespaces := map[string]cache.Config{
		operatorNamespace: {},
You're probably forgetting to include the watchNamespace.
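For illustration, a hedged sketch of what including the watchNamespace could look like (the `cacheNamespaces` helper is hypothetical; the variable names follow the diff above):

```go
package main

import "sigs.k8s.io/controller-runtime/pkg/cache"

// cacheNamespaces is a hypothetical helper that builds the
// DefaultNamespaces map, adding the watched namespace when it is set
// and differs from the operator's own namespace.
func cacheNamespaces(operatorNamespace, watchNamespace string) map[string]cache.Config {
	defaultNamespaces := map[string]cache.Config{
		operatorNamespace: {},
	}
	if watchNamespace != "" && watchNamespace != operatorNamespace {
		defaultNamespaces[watchNamespace] = cache.Config{}
	}
	return defaultNamespaces
}
```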
It's a little more complex than that. As per this discussion, the CRDs are expected to be installed by the time we configure the cache. I think I may need to remove &servingv1.Service{} from the selectors, as it is not always installed. I could do a conditional addition of &servingv1.Service{}, but that supposes any change in the presence of the CRD would mean re-creating the operator pod(s).
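A hedged sketch of such a conditional addition, probing the discovery API for the serving.knative.dev/v1 group before registering &servingv1.Service{} (the `addServingSelector` helper and its wiring are assumptions, not the PR's code):

```go
package main

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// addServingSelector is a hypothetical helper: it only caches Knative
// services when the serving.knative.dev/v1 API is actually served.
// Trade-off noted above: if the CRD is installed later, the operator
// pod(s) must be re-created to pick it up.
func addServingSelector(cfg *rest.Config, selectors map[client.Object]cache.ByObject, byObject cache.ByObject) error {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = dc.ServerResourcesForGroupVersion(servingv1.SchemeGroupVersion.String())
	if apierrors.IsNotFound(err) {
		return nil // Knative Serving CRDs absent: skip the selector
	}
	if err != nil {
		return err
	}
	selectors[&servingv1.Service{}] = byObject
	return nil
}
```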
Force-pushed from f065ed9 to a0ef631.
@gansheer may I suggest a different strategy? Whenever we upgrade dependencies, we'd better do it one by one instead of in one shot, in order to understand which dependency really fails. IMO, the first thing to do is to upgrade Kubernetes only. We can bump to 1.28 and later to 1.29. Once we are sure that all is stable, we can bump any other dependency.
Ref #5211
Ref #5307
Upgrade the Kubernetes API to 1.28 and controller-runtime to 0.16.
Release Note