The documentation states that the autoscalerRef section is optional, and we have a few deployments that do not use HPAs. However, I noticed that canaries for these HPA-less Deployments only ever scale to one pod during testing and rollout. The original Deployment object has 8 replicas, and the -primary Deployment also has 8 after canary creation, but when Flagger takes over it drops the original Deployment to 1 replica for the whole rollout test, so a 50% traffic swing is impractical. If I test with an HPA whose min/max are set to the Deployment's static replica count, it works, though it still spins up 8, then drops to 1, then goes back up to 8 before rollout.
What is the expected behavior with no autoscalerRef set up? Am I missing something? The following code seems to be where the static replica count of 1 comes from, and I don't see where Flagger would scale the replicas back up based on the Deployment's replica count:
if shouldAdvance {
    c.recordEventInfof(cd, "New revision detected! Scaling up %s.%s", cd.Spec.TargetRef.Name, cd.Namespace)
    c.sendNotification(cd, "New revision detected, starting canary analysis.",
        true, false)
    if err := c.deployer.Scale(cd, 1); err != nil {
        c.recordEventErrorf(cd, "%v", err)
        return false
    }
Yes, I should have mentioned that an HPA with an equal number of min/max replicas should be used when your app needs more than one replica. In the next canary version I'll make the HPA a requirement.
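For illustration, an HPA pinned to the Deployment's static replica count (8 in the report above) would look roughly like the sketch below. Names are placeholders, and the exact apiVersion accepted in the Canary's autoscalerRef depends on your Flagger and Kubernetes versions.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 8   # equal min/max keeps the canary at the full replica count
  maxReplicas: 8

# referenced from the Canary spec, e.g.:
#   autoscalerRef:
#     apiVersion: autoscaling/v2beta2
#     kind: HorizontalPodAutoscaler
#     name: my-app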
Sorry to bump a closed issue, but is making the HPA a requirement an absolute necessity here? We just transitioned from HPAs to VPAs for our deployments as an experiment, and VPAs and HPAs should not be used together (https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#limitations-of-beta-version). If the HPA becomes a requirement, we would basically have to revert that transition to keep using Flagger, or stay on an old version of Flagger.
@tanordheim the PR that fixed this issue makes Flagger work without an HPA. If you have the replicas field in your deployment spec, Flagger will use that instead of relying on an HPA to scale up the canary. I haven't used VPA, but I think it will conflict with Flagger: since it changes the deployment spec, every time it does so Flagger will restart the analysis. Please test it out and open an issue if that's the case.
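For anyone landing here later, a minimal sketch of the no-HPA setup described above: keep a static replicas count on the target Deployment and simply omit autoscalerRef from the Canary. Names, labels, and the image are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 8   # with no autoscalerRef, Flagger uses this to scale the canary up
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0.0
        ports:
        - containerPort: 8080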