Get capi targetsize from cache
This ensured that access to replicas during scale down operations was never stale, by always reading from the API server kubernetes#3104.
This honoured that behaviour while moving to the unstructured client kubernetes#3312.
This regressed that behaviour while trying to reduce the API server load kubernetes#4443.
This put the never-stale replicas behaviour back, at the cost of loading the API server again kubernetes#4634.

This PR tries to satisfy both: non-stale replicas during scale down, while preventing the API server from being overloaded. To achieve that, it lets TargetSize, which is called on every autoscaling cluster state loop, read the replica count from the locally cached unstructured object instead of querying the API server.
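For context, here is a minimal sketch of the two access patterns the history above trades between, assuming a dynamic client and a dynamic shared informer factory. The package name, GroupVersionResource, and function names are hypothetical, not the autoscaler's actual wiring:

// A minimal, hypothetical sketch; names here are illustrative only.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
)

// Hypothetical scalable resource; cluster-api MachineSets are one example.
var machineSetGVR = schema.GroupVersionResource{
	Group:    "cluster.x-k8s.io",
	Version:  "v1beta1",
	Resource: "machinesets",
}

// liveReplicas performs a GET against the API server on every call: the
// result is never stale, but every autoscaler loop adds API server load.
func liveReplicas(ctx context.Context, c dynamic.Interface, ns, name string) (int, error) {
	obj, err := c.Resource(machineSetGVR).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	return readReplicas(obj)
}

// cachedReplicas answers from the informer's local store, so calling it on
// every loop generates no API traffic; the value may lag by watch latency.
func cachedReplicas(f dynamicinformer.DynamicSharedInformerFactory, ns, name string) (int, error) {
	obj, err := f.ForResource(machineSetGVR).Lister().ByNamespace(ns).Get(name)
	if err != nil {
		return 0, err
	}
	return readReplicas(obj.(*unstructured.Unstructured))
}

// readReplicas mirrors the diff below: a missing spec.replicas counts as 0.
func readReplicas(obj *unstructured.Unstructured) (int, error) {
	replicas, found, err := unstructured.NestedInt64(obj.Object, "spec", "replicas")
	if err != nil {
		return 0, fmt.Errorf("error getting replica count: %w", err)
	}
	if !found {
		replicas = 0
	}
	return int(replicas), nil
}

The cached read can lag the API server by the informer's watch latency; the live read is never stale but costs one GET per loop iteration, which is the load kubernetes#4443 tried to reduce.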

Also note that the scale down implementation has changed: https://github.com/kubernetes/autoscaler/commits/master/cluster-autoscaler/core/scaledown.
enxebre committed Jul 13, 2022
1 parent db5e2f2 commit d7eb79a
Showing 1 changed file with 9 additions and 1 deletion.
@@ -18,6 +18,7 @@ package clusterapi

 import (
 	"fmt"
+	"github.com/pkg/errors"
 
 	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -52,7 +53,14 @@ func (ng *nodegroup) MaxSize() int {
 // (new nodes finish startup and registration or removed nodes are
 // deleted completely). Implementation required.
 func (ng *nodegroup) TargetSize() (int, error) {
-	return ng.scalableResource.Replicas()
+	replicas, found, err := unstructured.NestedInt64(ng.scalableResource.unstructured.Object, "spec", "replicas")
+	if err != nil {
+		return 0, errors.Wrap(err, "error getting replica count")
+	}
+	if !found {
+		replicas = 0
+	}
+	return int(replicas), nil
 }
 
 // IncreaseSize increases the size of the node group. To delete a node
