
[v1.31] fix(hetzner): deprecated server type will break on 2024-09-06 #7298

Conversation

apricote (Member)

What type of PR is this?

/kind bug
/kind cleanup

What this PR does / why we need it:

The cx11 server type was deprecated on 2024-06-06 and will be removed from the API on 2024-09-06. Once it is removed, the cluster-autoscaler Hetzner provider will no longer start, failing with the following error message:

Failed to get node infos for groups: failed to create resource list for node group draining-node-pool error: failed to get machine type cx11 info error: server type not found

As the node pool draining-node-pool is not used anywhere, this commit removes it along with the hardcoded reference to the deprecated server type.
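
For illustration, here is a minimal Go sketch of that failure path, assuming a hypothetical helper getMachineTypeInfo (the actual provider code differs). It uses hcloud-go, whose GetByName returns a nil result without an error when a server type no longer exists:

    // Minimal sketch of looking up a removed server type by name.
    // getMachineTypeInfo is a hypothetical stand-in for the provider's
    // internal lookup, not the actual cluster-autoscaler code.
    package main

    import (
        "context"
        "fmt"

        "github.com/hetznercloud/hcloud-go/v2/hcloud"
    )

    // getMachineTypeInfo resolves a server type name (e.g. the hardcoded
    // "cx11") via the Hetzner Cloud API. hcloud-go's GetByName returns a
    // nil server type, with no error, when no match exists.
    func getMachineTypeInfo(ctx context.Context, client *hcloud.Client, name string) (*hcloud.ServerType, error) {
        serverType, _, err := client.ServerType.GetByName(ctx, name)
        if err != nil {
            return nil, fmt.Errorf("failed to get machine type %s info error: %w", name, err)
        }
        if serverType == nil {
            // Once cx11 is removed from the API, this branch is taken and
            // node-group setup, and therefore startup, fails.
            return nil, fmt.Errorf("failed to get machine type %s info error: server type not found", name)
        }
        return serverType, nil
    }

    func main() {
        client := hcloud.NewClient(hcloud.WithToken("<HCLOUD_TOKEN>")) // placeholder token
        if _, err := getMachineTypeInfo(context.Background(), client, "cx11"); err != nil {
            fmt.Println("startup would fail:", err)
        }
    }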

Backport of #7211 to the 1.31 branch.

Which issue(s) this PR fixes:

Related to #7210

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fix Hetzner Provider not starting after 2024-09-07

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [Hetzner Cloud Changelog](https://docs.hetzner.cloud/changelog#2024-06-06-old-server-types-with-shared-intel-vcpus-are-deprecated)

@k8s-ci-robot added the kind/bug, kind/cleanup, cncf-cla: yes, and area/cluster-autoscaler labels on Sep 23, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: apricote

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the area/provider/hetzner, approved, and size/S labels on Sep 23, 2024
@Shubham82 (Contributor)

/lgtm

@k8s-ci-robot added the lgtm label on Sep 23, 2024
@k8s-ci-robot merged commit eb7eb3d into kubernetes:cluster-autoscaler-release-1.31 on Sep 23, 2024 (6 checks passed)