Add description of replica controller scaledown sort logic #26993

Merged · 1 commit · Mar 22, 2021
11 changes: 11 additions & 0 deletions content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -310,6 +310,17 @@ assuming that the number of replicas is not also changed).
A ReplicaSet can be scaled up or down by updating the `.spec.replicas` field. The ReplicaSet controller
ensures that the desired number of Pods with a matching label selector are available and operational.
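
The PR text describes this declaratively; as a minimal illustration (not part of the PR), a program built on client-go could perform the same update through the scale subresource. The function name `scaleReplicaSet` and its parameters are placeholders for this sketch:

```go
// Minimal sketch, assuming client-go, of scaling a ReplicaSet by
// updating .spec.replicas via the scale subresource.
package scaling

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleReplicaSet sets the desired replica count for the named ReplicaSet.
func scaleReplicaSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	// Read the current scale, change the desired count, and write it back.
	scale, err := cs.AppsV1().ReplicaSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().ReplicaSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```

From the command line, `kubectl scale rs <name> --replicas=<n>` performs the same update.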

When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to
Member commented:
I would argue that this is too detailed to be useful to users. Also we should avoid locking the criteria in a way that users start relying on it, making it harder to change in the future. In particular, we should state that these are preferences and not something that is guaranteed.

Important downscale selection criteria IMO:

  1. Pending (including unscheduled) pods.
  2. pod-deletion-cost annotation (probably worth stating the default)
  3. too many pods in a node (this is the rank, although not sure if we should include it in the documentation).
  4. lower running time (bucketed)

And if they all match, selection is random (the UIDs are 100% an implementation detail that shouldn't go in the documentation).

prioritize scaling down pods based on the following general algorithm (a sketch of this ordering follows the list):
1. Pending (and unschedulable) pods are scaled down first
2. If the `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
Contributor commented:

We should also list controller.kubernetes.io/pod-deletion-cost in https://kubernetes.io/docs/reference/labels-annotations-taints/

Member replied:

I will add that in #26739

@alculquicondor we should do that for the indexed job annotation too.

Member replied:

Opened #27106

the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently
comes before the older pod (the creation times are bucketed on an integer log scale).

If all of the above match, then selection is random.
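
To make the ordering above concrete, here is an illustrative Go comparator. It is not the controller's actual implementation: the `candidate` type, its fields, and the helper names are hypothetical, and the final tie-breaker is deliberately omitted because, as noted in review, it is an implementation detail.

```go
// Illustrative sketch of the scale-down ordering described above.
// Pods that sort first are deleted first.
package scaledown

import (
	"math"
	"sort"
	"strconv"
	"time"
)

type candidate struct {
	pending      bool      // pod is not yet running (includes unscheduled pods)
	deletionCost int32     // from the pod-deletion-cost annotation (default 0)
	nodeReplicas int       // related replicas running on the same node
	created      time.Time // pod creation timestamp
}

// podDeletionCost reads controller.kubernetes.io/pod-deletion-cost,
// treating a missing annotation as the default cost of 0. (In the real
// API, malformed values are rejected by validation; this sketch simply
// falls back to the default.)
func podDeletionCost(annotations map[string]string) int32 {
	v, ok := annotations["controller.kubernetes.io/pod-deletion-cost"]
	if !ok {
		return 0
	}
	cost, err := strconv.ParseInt(v, 10, 32)
	if err != nil {
		return 0
	}
	return int32(cost)
}

// logBucket collapses pod ages onto an integer log scale, so pods of
// roughly the same age compare as equal under criterion 4.
func logBucket(age time.Duration) int {
	seconds := age / time.Second
	if seconds <= 1 {
		return 0
	}
	return int(math.Floor(math.Log2(float64(seconds))))
}

// sortForScaleDown orders candidates so that the pods the text says
// should be removed first appear first in the slice.
func sortForScaleDown(pods []candidate, now time.Time) {
	sort.SliceStable(pods, func(i, j int) bool {
		a, b := pods[i], pods[j]
		if a.pending != b.pending {
			return a.pending // 1. pending/unschedulable pods first
		}
		if a.deletionCost != b.deletionCost {
			return a.deletionCost < b.deletionCost // 2. lower deletion cost first
		}
		if a.nodeReplicas != b.nodeReplicas {
			return a.nodeReplicas > b.nodeReplicas // 3. more co-located replicas first
		}
		// 4. the more recently created pod (bucketed on a log scale) first
		return logBucket(now.Sub(a.created)) < logBucket(now.Sub(b.created))
	})
	// Remaining ties are broken by an implementation detail; per the
	// documented behavior, treat the selection among them as random.
}
```

As the review discussion above stresses, these criteria are preferences of the current implementation, not guaranteed API behavior.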

### ReplicaSet as a Horizontal Pod Autoscaler Target

A ReplicaSet can also be a target for a Horizontal Pod Autoscaler (HPA); that is, an HPA can automatically scale a ReplicaSet.