
Not clear when/how to configure more than 3 masters #8769

Closed
schollii opened this issue Mar 18, 2020 · 8 comments · Fixed by #9387
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@schollii

1. What kops version are you running? The command kops version will display this information.

1.13.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.13.10

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

  1. Create a cluster with 3 master nodes (roughly as in the sketch after this list)
  2. Decide to increase the number of master nodes
  3. Find the page https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md
  4. Realize you're not sure what to do: if you use 5 master nodes, should you spread them over 5 zones, or are 3 zones OK? The docs seem to say that more than one master in a zone is actually less HA, but it's not clear why. And what if you want 11 master nodes in a region that has only 5 zones? You'd end up with 4 zones of 2 masters each and one zone of 3.
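
For concreteness, step 1 was done with a command along these lines (the cluster name, state store, and zones below are placeholders, not my real values):

```sh
# Hypothetical reproduction of step 1: a 3-master cluster, one master per AZ.
kops create cluster \
  --name=my.example.com \
  --state=s3://my-kops-state-store \
  --cloud=aws \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --master-count=3 \
  --node-count=3 \
  --yes
```

Step 2 is where it gets unclear: is bumping --master-count to 5 while keeping --master-zones at those same 3 zones a sensible configuration?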

Basically, what's not clear is whether it even makes sense to have more masters than zones. If it does, are there rules about the ratio of masters to zones? E.g. if you have 6 masters in 3 zones and one zone fails, you're down to 4 masters; why is that worse than a 5-master cluster in 3 zones where the one-master zone fails, which also leaves you with 4? This needs to be explained better.
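
If my understanding of etcd quorum is right (quorum = floor(N/2) + 1, so N masters tolerate losing N - quorum of them; please correct me if not), the arithmetic for the example above would be:

- 6 masters need a quorum of 4; losing a 2-master zone leaves exactly 4, so one more failure anywhere takes the control plane down.
- 5 masters in 3 zones (2+2+1) need a quorum of 3; losing the 1-master zone leaves 4 with margin for one more failure, but losing a 2-master zone leaves exactly 3, with no margin.
- 5 masters in 5 zones: any single zone failure removes only 1 master, leaving 4, with margin to spare.

Is that the reasoning behind "more than one master per zone is less HA"? If so, the docs could spell it out.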

It seems very likely that as time goes on, the number of services and hence pods will grow, and with it the load on the master nodes; at some point you will notice the CPU on the masters becoming "too high too often". At that point you will need to know the above.

5. What happened after the commands executed?

n/a

6. What did you expect to happen?

To have a clear idea of whether increasing the master count from 3 to 5 requires also increasing the number of zones, or whether I could stay at 3 zones; overall, I was expecting to come away with a better understanding of when and how to scale the control plane of a kops-based Kubernetes cluster.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

Probably n/a but let me know if applicable

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Probably not applicable

9. Anything else we need to know?

Not for now

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 16, 2020
@olemarkus
Member

Kops won't let you run more than 1 master in a single AZ, as far as I know. Any production cluster should definitely run 3 masters. 5 gives you even more HA (you can survive two AZs being unavailable, something that (almost) never happens).

The best-practice note there looks silly. Not sure AWS even has regions with only 2 AZs anymore.

@schollii
Author

Kops let me create 5 masters in 3 zones.

@olemarkus
Member

I see that you are running a fairly old kops version so it could be that this has changed now. I haven't checked though.
There are strong reasons for running 3 masters over 1, but running 5 masters seems overkill. If you need to scale, you probably want to use more performant instance types instead.
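
Roughly something like this, just as a sketch (the instance group names below are made up; yours will differ):

```sh
# List the instance groups; masters typically have names like master-<az>.
kops get instancegroups --name my.example.com --state s3://my-kops-state-store

# For each master IG, change spec.machineType (e.g. m5.large -> m5.xlarge):
kops edit instancegroup master-us-east-1a --name my.example.com --state s3://my-kops-state-store

# Apply the change and roll it through the masters:
kops update cluster --name my.example.com --state s3://my-kops-state-store --yes
kops rolling-update cluster --name my.example.com --state s3://my-kops-state-store --yes
```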

@olemarkus
Member

If you use kops 1.17, changing the number of masters should be quite easy anyway.

@olemarkus
Member

Revised the HA docs in #9387; have a look and see if it makes more sense.

@schollii
Author

@olemarkus thanks for the notice, I had a look and the changes are definitely helpful. Could you add something about changing the number of masters, or is that in a separate document? When could it make sense to have more than 3 masters, and in that case should they be spread over an equal number of zones?

@olemarkus
Member

Have you seen the docs on going from 1 to 3 masters? The procedure is the same.
I'd say running more than 3 masters is quite rare; I have never considered it for any of our setups. Failures are usually related to AZs rather than to the masters themselves, so if you ever do go for 5 masters, definitely put them in separate AZs.
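
Purely as an illustration (not from the docs; the names, zones, and instance types below are made up), 5 masters in 5 separate AZs would look roughly like this in the cluster spec and instance groups:

```yaml
# Sketch only: etcd members for 5 masters, one per AZ (cluster spec excerpt).
etcdClusters:
- name: main
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
  - name: b
    instanceGroup: master-us-east-1b
  - name: c
    instanceGroup: master-us-east-1c
  - name: d
    instanceGroup: master-us-east-1d
  - name: e
    instanceGroup: master-us-east-1e
# (the "events" etcd cluster would list the same five members)
---
# One InstanceGroup per master, each pinned to its own AZ, e.g.:
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my.example.com
  name: master-us-east-1d
spec:
  role: Master
  machineType: m5.large
  minSize: 1
  maxSize: 1
  subnets:
  - us-east-1d
```

The point being one master per subnet/AZ, so a single AZ outage only ever removes one etcd member.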
