
Bump lowest supported k8s version to 1.23 #2145

Closed
pavolloffay opened this issue Sep 22, 2023 · 15 comments · Fixed by #2162

@pavolloffay
Member Author

cc @open-telemetry/operator-maintainers

@jaronoff97
Contributor

I believe the lowest supported version for the old PDB and HPA APIs is 1.25, not 1.23.

I'm not sure we can bump to that yet, right?

(screenshot attached)

@JorTurFer
Contributor

We don't need to support both APIs until 1.25. In k8s 1.23, policy/v1 and autoscaling/v2 are already available, so we can query the API server using them even while the older APIs are still served. Internally, the API server translates between the versions, but that's a detail handled inside the API server.
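
To make that concrete, here is a minimal sketch (not the operator's actual code) of what building an HPA against only the autoscaling/v2 types looks like; the names, target, and thresholds are placeholders:

```go
// Sketch only: relying solely on the autoscaling/v2 types (available since
// Kubernetes 1.23). Names, target, and thresholds are placeholders.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func desiredHPA(namespace, name string) *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	cpuUtilization := int32(80)
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			// The scaled workload; an apps/v1 Deployment is just an example target.
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       name,
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			// autoscaling/v2 expresses metrics via MetricSpec/MetricTarget,
			// a one-to-one replacement for the v2beta2 equivalents.
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &cpuUtilization,
					},
				},
			}},
		},
	}
}

func main() {
	hpa := desiredHPA("default", "my-collector")
	fmt.Printf("%s scales %s/%s between %d and %d replicas\n",
		hpa.Name, hpa.Spec.ScaleTargetRef.Kind, hpa.Spec.ScaleTargetRef.Name,
		*hpa.Spec.MinReplicas, hpa.Spec.MaxReplicas)
}
```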

@JorTurFer
Contributor

JorTurFer commented Sep 22, 2023

While the API versions overlap, you can use whichever one you want. The API server stores the object in its "default" (storage) version, but that's internal. We don't need to care about that part: if both API versions are served, we can just use the new one and it will work.
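
As an illustration (a sketch assuming a kubeconfig in $KUBECONFIG and a placeholder HPA name, not the operator's code), reading an existing object through the autoscaling/v2 endpoint works even if it was originally applied as autoscaling/v2beta2, because the server converts it on the fly:

```go
// Sketch only: the client asks for autoscaling/v2 and the API server answers
// in that version regardless of which version the object was created with or
// is stored under. Kubeconfig path, namespace, and name are placeholders.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the HPA through the autoscaling/v2 endpoint only.
	hpa, err := clientset.AutoscalingV2().HorizontalPodAutoscalers("default").
		Get(context.Background(), "my-collector", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %s with maxReplicas=%d via autoscaling/v2\n",
		hpa.Name, hpa.Spec.MaxReplicas)
}
```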

@JorTurFer
Contributor

For instance, KEDA v2.9 works on k8s >= 1.23 because we switched to autoscaling/v2, and KEDA v2.8 works on k8s <= 1.25 because it uses autoscaling/v2beta2.

@jaronoff97
Contributor

Ah, I see. So anyone on k8s <= 1.25 who wants to use the old API versions will have to use an older version of the operator, and from now on we will no longer support both?

@JorTurFer
Contributor

If we remove support for autoscaling/v2beta2, the operator will require k8s >= v1.23. Clusters below that version won't be able to upgrade the operator because they don't support autoscaling/v2 (and the same applies to PodDisruptionBudget and k8s v1.21).

Whether this is acceptable or not is something you have to discuss (IMHO, I don't have a say in it xD).
I'd do it because k8s v1.22.17 was deprecated on 08 Dec 2022, and it reduces the boilerplate of supporting deprecated k8s versions.
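
For reference, a minimal sketch of the policy/v1 shape that would become the only supported path for the PDB (names and labels are placeholders, not the operator's actual objects):

```go
// Sketch only: building a PodDisruptionBudget with the policy/v1 types
// (GA since Kubernetes 1.21), i.e. without any policy/v1beta1 fallback.
// Namespace, name, and selector labels are placeholders.
package main

import (
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func desiredPDB(namespace, name string) *policyv1.PodDisruptionBudget {
	maxUnavailable := intstr.FromInt(1)
	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MaxUnavailable: &maxUnavailable,
			// Selects the pods the budget protects.
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app.kubernetes.io/name": name},
			},
		},
	}
}

func main() {
	pdb := desiredPDB("default", "my-collector")
	fmt.Printf("%s allows at most %s unavailable pod(s)\n",
		pdb.Name, pdb.Spec.MaxUnavailable.String())
}
```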

@jaronoff97
Contributor

Yeah, I would be in favor of removing support for the old versions in favor of only the new ones; it simplifies our code massively.

@JorTurFer
Contributor

I can tackle this as part of the PR I have ongoing if you agree with dropping the support for autoscaling/v2beta2. Should I do it?

@pavolloffay
Member Author

We don't need to support both APIs until 1.25. In k8s 1.23, policy/v1 and autoscaling/v2 are already available, so we can query the API server using them even while the older APIs are still served. Internally, the API server translates between the versions, but that's a detail handled inside the API server.

+1. We can simply use the new versions. Which API version the operator uses is an implementation detail; what matters is that we offer the same features in our CR.

@pavolloffay
Member Author

pavolloffay commented Sep 25, 2023

I can tackle this as part of the PR I have ongoing

I prefer a separate PR for dropping older versions and the cleanup that comes with it.

@JorTurFer
Contributor

Okay, what about limiting the PDB PR to policy/v1 and dropping autoscaling/v2beta2 in a separate PR?

@JorTurFer
Contributor

Or merging the current approach and dropping the support in another PR. I'd like to reduce the extra work.

@JorTurFer
Contributor

JorTurFer commented Sep 26, 2023

I have opened a PR getting rid of autoscaling/v2beta2. Once it's merged, I'll update my PDB PR.

@jaronoff97
Contributor

Awesome, thank you! I commented back there.
