
Replica behavior without an HPA #360

Closed
jtolsma opened this issue Nov 5, 2019 · 3 comments · Fixed by #363
Labels
question Further information is requested

Comments

@jtolsma

jtolsma commented Nov 5, 2019

The documentation states that the autoscalerRef section is optional, and we have a few Deployments that do not use HPAs. However, I've noticed that canaries for these HPA-less Deployments only ever scale to one pod during testing/rollout. The original Deployment object has replicas set to 8, and the -primary Deployment also has 8 after canary creation, but when Flagger takes over it drops the original Deployment to 1 replica for the whole rollout test, so a 50% traffic swing is impractical. If I test with an HPA whose min/max are set to the Deployment's static replica count, it works, though it still spins up 8 pods, drops to 1, then goes back to 8 before rollout.

What is the expected behavior with no autoscalerRef set up? Am I missing something? The following code seems to be where the static replica count of 1 comes from, and I don't see where Flagger would scale the replicas back up based on the Deployment's replica count:

    if shouldAdvance {
        c.recordEventInfof(cd, "New revision detected! Scaling up %s.%s", cd.Spec.TargetRef.Name, cd.Namespace)
        c.sendNotification(cd, "New revision detected, starting canary analysis.",
            true, false)
        // the canary is always scaled to a single replica here
        if err := c.deployer.Scale(cd, 1); err != nil {
            c.recordEventErrorf(cd, "%v", err)
            return false
        }
    }
@stefanprodan
Member

Yes, I should've mentioned that an HPA with an equal number of min/max replicas should be used when your app needs more than one replica. In the next canary version I'll make the HPA a requirement.
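For reference, the workaround described above could look something like this: an HPA whose min and max are pinned to the Deployment's static replica count, referenced from the Canary's autoscalerRef (resource names and metric values here are illustrative, not from this thread):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 8   # min == max pins the replica count, so the HPA never actually scales
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 99
```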

@tanordheim
Contributor

Sorry to bump a closed issue, but is making the HPA a requirement absolutely necessary here? We just transitioned from HPAs to VPAs for our deployments as an experiment, and VPAs and HPAs should not be used together (https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#limitations-of-beta-version). If this change is made, we would basically have to revert our transition to keep using Flagger, or stay on an old version of Flagger.

@stefanprodan
Member

@tanordheim the PR that fixed this issue makes Flagger work without an HPA. If you have the replicas field set in your Deployment spec, Flagger will use that instead of relying on an HPA to scale up the canary. I haven't used VPA, but I think it will conflict with Flagger, since it changes the Deployment spec, and each time it does that Flagger will restart the analysis. Please test it out and open an issue if that's the case.
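In other words, after that PR an explicit replica count in the Deployment spec should be enough on its own. A minimal sketch of such a Deployment (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 8   # with no autoscalerRef, Flagger scales the canary using this value
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: nginx:1.25
```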
