
Add comment clarifying network allocation and sizes #6607

Merged
merged 2 commits into from
Sep 10, 2020

Conversation

mikejoh
Contributor

@mikejoh mikejoh commented Sep 1, 2020

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind api-change
/kind bug
/kind cleanup
/kind design
/kind documentation
/kind failing-test
/kind feature
/kind flake

What this PR does / why we need it:
In inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml there's a typo that may cause confusion about kube_network_node_prefix and how it relates to kube_pods_subnet. This was fixed as seen here:

# internal network node size allocation (optional). This is the size allocated
# to each node on your network. With these defaults you should have
# room for 64 nodes with 254 pods per node.
# Example: Up to 256 nodes, 100 pods per node (/16 network):
# - kube_service_addresses: 10.233.0.0/17
# - kube_pods_subnet: 10.233.128.0/17
# - kube_network_node_prefix: 25
# Example: Up to 4096 nodes, 100 pods per node (/12 network):
# - kube_service_addresses: 10.192.0.0/13
# - kube_pods_subnet: 10.200.0.0/13
# - kube_network_node_prefix: 25

This PR adds the comment from defaults/main.yml to the sample k8s-cluster.yml file.
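As an aside, the arithmetic behind examples like these is easy to check. Here is a quick sketch (not part of the PR; the helper name is mine) using Python's ipaddress module, assuming the per-node block loses two addresses to the network and broadcast addresses:

```python
import ipaddress

def capacity(pods_subnet: str, node_prefix: int):
    """Nodes and usable pod IPs per node implied by the two settings."""
    net = ipaddress.ip_network(pods_subnet)
    nodes = 2 ** (node_prefix - net.prefixlen)   # per-node blocks that fit in the subnet
    pod_ips = 2 ** (32 - node_prefix) - 2        # minus network/broadcast addresses
    return nodes, pod_ips

print(capacity("10.233.128.0/17", 25))  # (256, 126)
print(capacity("10.200.0.0/13", 25))    # (4096, 126)
```

Note that both examples in the quoted comment yield 126 usable addresses per node, not 100; that discrepancy is what the review discussion below turns on.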

Which issue(s) this PR fixes:

N/A

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 1, 2020
@k8s-ci-robot
Contributor

Hi @mikejoh. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Sep 1, 2020
@floryut
Member

floryut commented Sep 1, 2020

/ok-to-test
/lgtm

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 1, 2020
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 1, 2020
@EppO
Contributor

EppO commented Sep 3, 2020

I'm really confused by the docs.

This is the size allocated to each node on your network.

for pod allocation! It's used by the controller-manager to allocate per-node CIDRs from the cluster CIDR for pod networking (when --allocate-node-cidrs=true).

Example: Up to 256 nodes, 100 pods per node (/16 network)
Example: Up to 4096 nodes, 100 pods per node (/12 network)

max pods per node depends on the pod subnet size and on kubelet_max_pods (which defaults to 110 today), but it's not 100 in either case.
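The ceiling EppO describes can be expressed as a min() of the two limits. A minimal sketch (a hypothetical helper, not kubespray code):

```python
def effective_max_pods(node_prefix: int, kubelet_max_pods: int = 110) -> int:
    """Pods a node can actually run: the smaller of the per-node
    address space and the kubelet's configured cap."""
    usable_ips = 2 ** (32 - node_prefix) - 2  # minus network/broadcast addresses
    return min(usable_ips, kubelet_max_pods)

print(effective_max_pods(25))  # min(126, 110) -> 110, not 100
print(effective_max_pods(26))  # min(62, 110)  -> 62
```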

@mikejoh
Contributor Author

mikejoh commented Sep 3, 2020

I'm really confused by the docs.

This is the size allocated to each node on your network.

for pod allocation! It's used by the controller-manager to allocate per-node CIDRs from the cluster CIDR for pod networking (when --allocate-node-cidrs=true).

Example: Up to 256 nodes, 100 pods per node (/16 network)
Example: Up to 4096 nodes, 100 pods per node (/12 network)

max pods per node depends on the pod subnet size and on kubelet_max_pods (which defaults to 110 today), but it's not 100 in either case.

Thanks for the feedback!

You're right, it's confusing when reading it again, and the fact that kubelet_max_pods isn't considered (or even mentioned) doesn't make it any clearer. I also think it's interesting to dig into the hard limit of 110 pods per node; here's some insight into why that is.

I'm not sure how to rewrite this at the moment; tuning kubelet_max_pods to use every address in e.g. a /24 might still not be something we want to do, or recommend without a disclaimer. Maybe it's better to add a clarifying doc, or to point to an official one over at kubernetes.io.

You also mentioned the --allocate-node-cidrs option, which I believe is in scope here too and might be worth a note. In the context of kubespray that might be a corner case, though: the defaults more or less assume you have a cloud somewhere and want to use one of the CNI network plugins.

@EppO
Contributor

EppO commented Sep 3, 2020

I think it would be more precise to say "up to x or kubelet_max_pods, whichever is smaller", or something along those lines. kubelet_max_pods is configurable in kubespray today, so you can change it depending on your horsepower.

@mikejoh
Contributor Author

mikejoh commented Sep 4, 2020

I think it would be more precise to say "up to x or kubelet_max_pods, whichever is smaller", or something along those lines. kubelet_max_pods is configurable in kubespray today, so you can change it depending on your horsepower.

Right, so maybe something like this:

# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable which defaults to 110.
#
# Example: 
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 24
#  - kubelet_max_pods: 110
#
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 25
#  - kubelet_max_pods: 110

@EppO
Contributor

EppO commented Sep 4, 2020

I think it would be more precise to say "up to x or kubelet_max_pods, whichever is smaller", or something along those lines. kubelet_max_pods is configurable in kubespray today, so you can change it depending on your horsepower.

Right, so maybe something like this:

# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable which defaults to 110.
#
# Example: 
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 24
#  - kubelet_max_pods: 110
#
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 25
#  - kubelet_max_pods: 110

way better, thanks a lot!

…ng network allocation and sizes

Signed-off-by: Mikael Johansson <[email protected]>
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed lgtm "Looks good to me", indicates that a PR is ready to be merged. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Sep 5, 2020
@mikejoh
Contributor Author

mikejoh commented Sep 5, 2020

Note that I've also changed the comment in the roles/kubespray-defaults/defaults/main.yaml file.

@woopstar
Member

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 10, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mikejoh, woopstar

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 10, 2020
@k8s-ci-robot k8s-ci-robot merged commit 040dda3 into kubernetes-sigs:master Sep 10, 2020
erulabs added a commit to kubesail/kubespray that referenced this pull request Sep 12, 2020
* 'master' of https://github.com/kubernetes-sigs/kubespray: (32 commits)
  Update api version, deprecated in 1.19 (kubernetes-sigs#6656)
  Update etcd to 3.4.13 (kubernetes-sigs#6658)
  Update dockerfile for v1.19.1 (kubernetes-sigs#6668)
  yamllint: ignore .git dir (kubernetes-sigs#6667)
  fix kubelet_flexvolumes_plugins_dir undefined (kubernetes-sigs#6645)
  Remove deprecated (and removed in 1.19) flag and function --basic-auth-file (kubernetes-sigs#6655)
  Update CoreDNS to 1.7.0 (kubernetes-sigs#6657)
  Update various dependencies following 1.19 release (kubernetes-sigs#6660)
  Add Kubernetes 1.19.1 hashes and set default (kubernetes-sigs#6654)
  crio: use system default for storage driver by default (kubernetes-sigs#6637)
  Add iptables_backend to weave options (kubernetes-sigs#6639)
  Add comment clarifying network allocation and sizes (kubernetes-sigs#6607)
  Allowing resource management of metrics-server container.  Will allow fine-tuning of resource allocation and solving throttling issues. Setting defaults as per the current request & limit allocation: cpu: 43m, memory 55Mi for both limits & requests. (kubernetes-sigs#6652)
  Fix a bunch of failed quality rules (kubernetes-sigs#6646)
  Update calico to 3.16.1 (kubernetes-sigs#6644)
  NetworkManager lists must be separated by , (kubernetes-sigs#6643)
  Set ansible_python_interpreter to python3 on debian (fix error with mitogen) (kubernetes-sigs#6633)
  Use v2.14.0 as base image for CI (kubernetes-sigs#6636)
  Cleanup v1.16 hashes (kubernetes-sigs#6635)
  Update kube_version_min_required for 2.14 release (kubernetes-sigs#6634)
  ...
LuckySB pushed a commit to southbridgeio/kubespray that referenced this pull request Jan 16, 2021
…6607)

* Add comment from roles/kubespray-defaults/defaults/main.yaml clarifying network allocation and sizes

Signed-off-by: Mikael Johansson <[email protected]>

* Rewrite of the comment and added new examples

Signed-off-by: Mikael Johansson <[email protected]>