CA: Hetzner cloud-init example may need to be updated for new community packages #6106

Closed
dominic-p opened this issue Sep 13, 2023 · 11 comments
Labels
area/cluster-autoscaler
area/provider/hetzner (Issues or PRs related to Hetzner provider)
kind/feature (Categorizes issue or PR as related to a new feature.)
lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@dominic-p
Contributor

Which component are you using?: Cluster Autoscaler with Hetzner

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Ever since updating to the new community-hosted APT repo, I've been experiencing network issues with my clusters. It appears that the issue has to do with the KUBELET_EXTRA_ARGS env variable (see this comment). I was previously configuring this variable as shown in the example cloud-init script here.
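For reference, the pattern in question is roughly the following. This is only a minimal sketch of the idea, not the exact example script; the --cloud-provider=external flag and the /etc/default/kubelet path are assumptions on my part:

```yaml
#cloud-config
# Sketch of the old pattern (illustrative; not the exact example script).
# KUBELET_EXTRA_ARGS is written to /etc/default/kubelet, which the
# kubelet systemd unit reads as an environment file.
write_files:
  - path: /etc/default/kubelet
    content: |
      KUBELET_EXTRA_ARGS=--cloud-provider=external
runcmd:
  # If the community deb package ships its own /etc/default/kubelet,
  # installing it here can clobber the file written above.
  - apt-get update
  - apt-get install -y kubelet kubeadm kubectl
```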

Describe the solution you'd like.:

I'm not exactly sure why the CA needs this env variable set for Hetzner. In my testing, it didn't seem to be affected like the CCM was. But I thought I would bring up the potential issue anyway.

I'm currently working around the problem by overwriting the new file that the deb package introduces, but that's not exactly the most elegant solution.
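Concretely, the workaround looks something like this (illustrative only; it assumes the file in question is /etc/default/kubelet):

```yaml
#cloud-config
runcmd:
  - apt-get update
  - apt-get install -y kubelet kubeadm kubectl
  # Workaround sketch: re-apply the kubelet args *after* the package
  # install so the package-provided file does not win.
  - echo 'KUBELET_EXTRA_ARGS=--cloud-provider=external' > /etc/default/kubelet
  - systemctl daemon-reload
  - systemctl restart kubelet
```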

@dominic-p dominic-p added the kind/feature label Sep 13, 2023
@apricote
Member

I'm not exactly sure why the CA needs this env variable set for Hetzner. In my testing, it didn't seem to be affected like the CCM was. But I thought I would bring up the potential issue anyway.

I don't think this variable is specifically required for cluster-autoscaler; it's more that it makes sense to always deploy hcloud-cloud-controller-manager in tandem with CA.

The cloud-init example is really outdated. If anyone wants to update it, I am more than happy to review/help.

@apricote
Member

/area provider/hetzner

@k8s-ci-robot k8s-ci-robot added the area/provider/hetzner label Oct 20, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 30, 2024
@dominic-p
Contributor Author

I don't think this one is stale. I haven't had a chance to look into updating the example cloud-init myself, but it would be helpful for future users.

@apricote
Member

apricote commented Feb 5, 2024

/remove-lifecycle stale

If anyone wants to work on this, we made these changes to the hcloud-cloud-controller-manager documentation regarding KUBELET_EXTRA_ARGS: https://github.com/hetznercloud/hcloud-cloud-controller-manager/pull/607/files
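For anyone picking this up, the gist of that change is to stop relying on /etc/default/kubelet and configure the kubelet through a systemd drop-in instead, which package installs and upgrades leave alone. A rough cloud-init sketch (the drop-in file name here is illustrative; see the linked PR for the exact snippet):

```yaml
#cloud-config
write_files:
  # Drop-in read by kubelet.service in addition to the package's own
  # unit files; it survives deb package installs/upgrades.
  - path: /etc/systemd/system/kubelet.service.d/20-hcloud.conf
    content: |
      [Service]
      Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
runcmd:
  # Run after the kubelet package is installed.
  - systemctl daemon-reload
  - systemctl restart kubelet
```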

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Feb 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label May 5, 2024
@dominic-p
Contributor Author

Unfortunately, I haven't had a chance to update the docs on this yet, but I still don't think it's stale.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jun 5, 2024
@dominic-p
Contributor Author

I still think a doc update might be appropriate here, but I haven't had a chance to tackle it myself.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jul 5, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage bot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
