fix spreadconstraints[i].MaxGroups Invalidation when scaleup replicas #1324
Conversation
Force-pushed from 1e2930a to e7fceb1
/cc @Garrybest
Thanks @huone1, I will check it later.
/assign
This PR generally LGTM. But I'm a little curious about how the … Taking #1323 as an example, we divided 56 replicas into … How does the …
/cc @mrlihanbo
PR #1334 adds a new score plugin, clusterLocality, which favors clusters that already have the requested resource, and supplies a preferred policy to choose the available clusters.
Got it, nice work👍 |
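(For readers following along: a minimal sketch of what such a locality-favoring score could look like. This is illustrative only; the types, function names, and scoring scale below are assumptions, not the actual #1334 plugin.)

package clusterlocality

// TargetCluster pairs a cluster name with its currently assigned replicas
// (an illustrative stand-in for the scheduler's own type).
type TargetCluster struct {
	Name     string
	Replicas int32
}

// Score favors a candidate cluster that already holds replicas of the
// workload, so a scale-up prefers growing existing placements over
// spreading onto new clusters.
func Score(scheduled []TargetCluster, candidate string) int64 {
	for _, tc := range scheduled {
		if tc.Name == candidate && tc.Replicas > 0 {
			return 100 // highest score: replicas already live here
		}
	}
	return 0 // lowest score: this would be a brand-new placement
}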
pkg/scheduler/core/util.go
Outdated
				break
			}
		}
	}
	return res, validTarget
The res could be calculated from validTarget, so I suggest removing it:
diff --git a/pkg/scheduler/core/division_algorithm.go b/pkg/scheduler/core/division_algorithm.go
index e0b2c082..689b8231 100644
--- a/pkg/scheduler/core/division_algorithm.go
+++ b/pkg/scheduler/core/division_algorithm.go
@@ -210,7 +210,7 @@ func scaleUpScheduleByReplicaDivisionPreference(
preference policyv1alpha1.ReplicaDivisionPreference,
) ([]workv1alpha2.TargetCluster, error) {
// Step 1: Find the clusters that have old replicas, so we can prefer to assign new replicas towards them.
- scheduledClusterNames, scheduledClusters := findOutScheduledCluster(spec.Clusters, clusters)
+ scheduledClusters := findOutScheduledCluster(spec.Clusters, clusters)
// Step 2: calculate the assigned Replicas in scheduledClusters
assignedReplicas := util.GetSumOfReplicas(scheduledClusters)
@@ -229,7 +229,7 @@ func scaleUpScheduleByReplicaDivisionPreference(
// If not, the old replicas may be recreated which is not expected during scaling up.
// The parameter `scheduledClusterNames` is used to make sure that we assign new replicas to them preferentially
// so that all the replicas are aggregated.
- result, err := divideReplicasByPreference(clusterAvailableReplicas, newSpec.Replicas, preference, scheduledClusterNames)
+ result, err := divideReplicasByPreference(clusterAvailableReplicas, newSpec.Replicas, preference, util.ConvertToClusterNames(scheduledClusters))
if err != nil {
return result, err
}
diff --git a/pkg/scheduler/core/util.go b/pkg/scheduler/core/util.go
index 1ceab827..93e2b8e4 100644
--- a/pkg/scheduler/core/util.go
+++ b/pkg/scheduler/core/util.go
@@ -77,11 +77,10 @@ func calAvailableReplicas(clusters []*clusterv1alpha1.Cluster, spec *workv1alpha
// findOutScheduledCluster will return a name set of clusters
// which are a part of `feasibleClusters` and have non-zero replicas.
-func findOutScheduledCluster(tcs []workv1alpha2.TargetCluster, candidates []*clusterv1alpha1.Cluster) (sets.String, []workv1alpha2.TargetCluster) {
+func findOutScheduledCluster(tcs []workv1alpha2.TargetCluster, candidates []*clusterv1alpha1.Cluster) []workv1alpha2.TargetCluster {
validTarget := make([]workv1alpha2.TargetCluster, 0)
- res := sets.NewString()
if len(tcs) == 0 {
- return res, validTarget
+ return validTarget
}
for _, targetCluster := range tcs {
@@ -92,14 +91,13 @@ func findOutScheduledCluster(tcs []workv1alpha2.TargetCluster, candidates []*clu
// must in `candidates`
for _, cluster := range candidates {
if targetCluster.Name == cluster.Name {
- res.Insert(targetCluster.Name)
validTarget = append(validTarget, targetCluster)
break
}
}
}
- return res, validTarget
+ return validTarget
}
// resortClusterList is used to make sure scheduledClusterNames are in front of the other clusters in the list of
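(For reference, the util.ConvertToClusterNames helper used in the suggested diff could be as small as the following sketch. The body here is an assumption; the real helper in pkg/util may differ.)

package util

import (
	"k8s.io/apimachinery/pkg/util/sets"

	workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
)

// ConvertToClusterNames collects the cluster names from a slice of
// TargetCluster into a string set.
func ConvertToClusterNames(clusters []workv1alpha2.TargetCluster) sets.String {
	names := sets.NewString()
	for _, cluster := range clusters {
		names.Insert(cluster.Name)
	}
	return names
}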
It is better, and I have fixed it.
Force-pushed from e7fceb1 to 72a91ba
pkg/scheduler/core/util.go
Outdated
@@ -77,11 +77,12 @@ func calAvailableReplicas(clusters []*clusterv1alpha1.Cluster, spec *workv1alpha

// findOutScheduledCluster will return a name set of clusters
The comments should be updated as well. Sorry for missing it in my last comment.
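(Something like the following would match the new signature — a wording suggestion, not the committed text:)

// findOutScheduledCluster returns the target clusters which are a part
// of `candidates` and have non-zero replicas.
func findOutScheduledCluster(tcs []workv1alpha2.TargetCluster, candidates []*clusterv1alpha1.Cluster) []workv1alpha2.TargetCluster {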
Force-pushed from 72a91ba to 9aa1349
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: RainbowMango. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: huone1 <[email protected]>
Force-pushed from 9aa1349 to 75aa4ce
/lgtm
Signed-off-by: huone1 [email protected]
What type of PR is this?
/kind bug
What this PR does / why we need it:
Because spreadconstraints[i].MaxGroups defines the maximum number of target clusters, the number of selected clusters must not exceed it in any scenario.
In addition, the difference between the current clusters and the scheduled clusters was not considered when calculating the scheduled replicas.
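(To make the intended invariant concrete, here is a hedged sketch; the function name and placement are illustrative, not the actual patch:)

// clampToMaxGroups enforces the invariant this PR is about: the number
// of selected clusters must never exceed spreadconstraints[i].MaxGroups.
// Illustrative sketch only; the real fix lives in the scheduler's
// division logic.
func clampToMaxGroups(clusterNames []string, maxGroups int) []string {
	if maxGroups <= 0 || len(clusterNames) <= maxGroups {
		return clusterNames // unconstrained, or already within the limit
	}
	return clusterNames[:maxGroups] // keep at most MaxGroups clusters
}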
Which issue(s) this PR fixes:
Fixes #1323
Let's test it following the reproduction steps from issue #1323:
Before scale-up:
After scale-up:
Special notes for your reviewer:
NONE
Does this PR introduce a user-facing change?: