
Is it possible to run k8s autoscaler on our own cluster? #953

Closed
tjliupeng opened this issue Jun 12, 2018 · 6 comments

Comments

@tjliupeng

Hi, guys,

According to the autoscaler documentation, it should be run on a GCE, GKE, Azure or AWS cluster. I am just wondering whether it can run on our own k8s cluster in our private cloud.

Thanks!

@kgolab
Collaborator

kgolab commented Jun 12, 2018

Hi @tjliupeng,

I guess you'd need to provide your own implementation of the CloudProvider interface, as I suspect that your private cloud doesn't match any of the existing implementations.

@tjliupeng
Author

tjliupeng commented Jun 12, 2018

@kgolab, what do you mean by "provide your own implementation of the CloudProvider interface"?

@aleksandra-malinowska
Contributor

Cluster Autoscaler adds or removes nodes from the cluster by creating or deleting VMs. To separate the autoscaling logic (the same for all clouds) from the API calls required to execute it (different for each cloud), the latter are hidden behind an interface. Each supported cloud has its own implementation of it, and the --cloud-provider flag determines which one will be used. All code related to this can be found here: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider

To add support for your private cloud:

  • Write a client for your private cloud in Go, implementing the CloudProvider interface (optional methods can return e.g. an error with the message "not implemented"); a skeleton sketch follows this list.
  • Add code constructing it to the builder.
  • Build a custom image of Cluster Autoscaler that includes those changes and configure it to start with your cloud provider. We can't accept code related to your custom solution unless it's publicly available (e.g. OpenStack), so you'll have to make your own releases.
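
For illustration, a minimal sketch of what such a skeleton could look like. It uses simplified stand-ins rather than the real cloudprovider.CloudProvider and NodeGroup interfaces (check cloud_provider.go in the release you build against for the exact methods and signatures); the package name privatecloud, its fields, and the API calls hinted at in the comments are all hypothetical:

    // Package privatecloud is a hypothetical Cluster Autoscaler cloud provider.
    // It is a simplified stand-in: the real interfaces live in
    // cluster-autoscaler/cloudprovider/cloud_provider.go and have more methods.
    package privatecloud

    import (
        "errors"
        "fmt"
    )

    // ErrNotImplemented is a convenient return value for optional methods.
    var ErrNotImplemented = errors.New("not implemented")

    // nodeGroup describes one group of identical VMs in the private cloud.
    type nodeGroup struct {
        id      string
        minSize int
        maxSize int
        target  int
        members map[string]bool // providerID -> membership
    }

    func (g *nodeGroup) Id() string               { return g.id }
    func (g *nodeGroup) MinSize() int             { return g.minSize }
    func (g *nodeGroup) MaxSize() int             { return g.maxSize }
    func (g *nodeGroup) TargetSize() (int, error) { return g.target, nil }

    // IncreaseSize asks the private cloud to add delta VMs to this group.
    func (g *nodeGroup) IncreaseSize(delta int) error {
        if g.target+delta > g.maxSize {
            return fmt.Errorf("size increase too large: %d > max %d", g.target+delta, g.maxSize)
        }
        g.target += delta
        // here: call the private cloud API to actually create the VMs
        return nil
    }

    // DeleteNodes must remove specific instances, not just shrink the group.
    func (g *nodeGroup) DeleteNodes(providerIDs []string) error {
        for _, id := range providerIDs {
            if !g.members[id] {
                return fmt.Errorf("instance %s does not belong to group %s", id, g.id)
            }
            // here: call the private cloud API to delete this exact instance
            delete(g.members, id)
            g.target--
        }
        return nil
    }

    // privateCloudProvider is what the builder would construct when a flag
    // such as --cloud-provider=privatecloud (hypothetical name) is passed.
    type privateCloudProvider struct {
        groups []*nodeGroup
    }

    func (p *privateCloudProvider) Name() string { return "privatecloud" }

    // NodeGroupForNode maps a node's providerID back to the group it belongs to.
    func (p *privateCloudProvider) NodeGroupForNode(providerID string) (*nodeGroup, error) {
        for _, g := range p.groups {
            if g.members[providerID] {
                return g, nil
            }
        }
        return nil, fmt.Errorf("no node group found for %s", providerID)
    }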

There are a couple of things to consider before you even get started:

  • Abstractions used by Cluster Autoscaler assume nodes belong to "node groups". All nodes within a group must be of the same machine type (have the same amount of resources), have the same set of labels and taints, and be located in the same availability zone. This doesn't mean your private cloud has to have a concept of such node groups, but it helps.
  • There must be a way to delete a specific node. If your cloud supports instance groups and you're only able to provide a method to decrease the size of a given group, without guaranteeing which instance will be killed, it won't work well.
  • There must be a way to match a Kubernetes node to the instance it's running on. This is usually done by the kubelet setting the node's ProviderID field to an instance ID that can be used in API calls to the cloud (see the sketch after this list).
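
For the last point, a rough sketch of the usual node-to-instance mapping. It assumes the kubelet registers nodes with a provider ID of the hypothetical form mycloud://<instance-id>; real providers each define their own format (e.g. aws:///<zone>/<instance-id>):

    package privatecloud

    import (
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    // instanceIDFromNode extracts the cloud instance ID that the kubelet
    // recorded in the node's Spec.ProviderID, so it can be used in API
    // calls to the private cloud. The "mycloud://" prefix is hypothetical.
    func instanceIDFromNode(node *corev1.Node) (string, error) {
        providerID := node.Spec.ProviderID
        if providerID == "" {
            return "", fmt.Errorf("node %s has no providerID set by the kubelet", node.Name)
        }
        id := strings.TrimPrefix(providerID, "mycloud://")
        if id == providerID || id == "" {
            return "", fmt.Errorf("unexpected providerID format: %q", providerID)
        }
        return id, nil
    }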

@tjliupeng
Author

Thanks for the detailed explanation, @aleksandra-malinowska

@balleon

balleon commented Dec 24, 2020

@aleksandra-malinowska
I am looking to use the Autoscaler with a private cloud like OpenStack (not OpenStack Magnum).
Is the node group concept mandatory with a custom provider?

@MaciekPytel
Contributor

MaciekPytel commented Jan 5, 2021

It is. All CA does to scale up is pick the best node group to scale up and change its desired size. It expects that the node group will be able to start the VM and configure it so that the VM can join the cluster (i.e. all CA does is make the decision to resize a particular node group; it doesn't actually create nodes).
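
As a rough sketch of that contract, reusing the simplified nodeGroup stand-in from the earlier comment (the "first group with headroom" rule here is only a placeholder for CA's real expander strategies):

    package privatecloud

    import "fmt"

    // scaleUp illustrates where CA's responsibility ends: it picks a node
    // group and raises its desired size; starting the VMs and configuring
    // them so they join the cluster is entirely the cloud side's job.
    func scaleUp(groups []*nodeGroup, nodesNeeded int) error {
        for _, g := range groups {
            target, err := g.TargetSize()
            if err != nil {
                return err
            }
            if target+nodesNeeded <= g.MaxSize() {
                // CA's decision ends here; the private cloud must now
                // create the VMs and get them to register as nodes.
                return g.IncreaseSize(nodesNeeded)
            }
        }
        return fmt.Errorf("no node group has room for %d more nodes", nodesNeeded)
    }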
