Doc: KIND complex network scenarios #1337
---
title: "Networking Scenarios"
menu:
  main:
    parent: "user"
    identifier: "networking-scenarios"
    weight: 3
---
# Networking scenarios [Linux Only]

KIND runs Kubernetes clusters in Docker, and leverages Docker networking for all the network features: port mapping, IPv6, container connectivity, etc.
## Docker Networking

<img src="/docs/user/images/kind-docker-network.png"/>

KIND uses [the default docker bridge network](https://docs.docker.com/network/bridge/#use-the-default-bridge-network).

It creates a bridge named **docker0**
{{< codeFromInline lang="bash" >}}
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8fb3fa672192        bridge              bridge              local
0c8d84f52592        host                host                local
558684a8afb8        none                null                local
{{< /codeFromInline >}}
with IP address 172.17.0.1/16.
{{< codeFromInline lang="bash" >}}
$ ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:83:eb:5e:67 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
{{< /codeFromInline >}}
Docker also creates iptables NAT rules on the Docker host that masquerade the traffic from the containers connected to the docker0 bridge, so they can reach the outside world.
## Kubernetes Networking

<img src="/docs/user/images/kind-kubernetes-network-kindnet.png"/>

[The Kubernetes network model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) implies end to end connectivity without NAT between Pods.

By default, KIND uses its own CNI plugin, **Kindnet**, which installs the corresponding routes and iptables rules on the cluster nodes.
## Services

[Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/) are an abstract way to expose an application running on a set of Pods as a network service.

There are different types of Services:

* ClusterIP
* NodePort
* LoadBalancer
* Headless
* ExternalName

On Linux hosts, you can access the ClusterIP address of a Service directly by adding a route to the configured **serviceSubnet** via any of the nodes that belong to the cluster, so there is no need to use NodePort or LoadBalancer Services.
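As a minimal sketch: assuming a hypothetical serviceSubnet of `10.115.0.0/16` and a node whose internal IP is `172.17.0.2` (as reported by `kubectl get nodes -o wide`), the following builds and prints the route command so it can be reviewed before running it with root privileges:

```shell
# Hypothetical values: use your cluster's serviceSubnet and any node's IP.
SERVICE_SUBNET="10.115.0.0/16"
NODE_IP="172.17.0.2"
# Print the route command; run it with sudo (or pipe to `sudo sh`) to apply it.
echo "ip route add ${SERVICE_SUBNET} via ${NODE_IP}"
```

Once the route is installed, ClusterIP addresses in that subnet are reachable directly from the host.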
## Multiple clusters

As explained above, all KIND clusters share the same Docker network, which means that all the cluster nodes have direct connectivity.

If we want to spawn multiple clusters and provide Pod to Pod connectivity between the clusters, we first have to configure the cluster networking parameters to avoid address overlapping.
### Example: Kubernetes multi-region

Let's take an example emulating 2 clusters: A and B.

For cluster A we are going to use the following network parameters:
{{< codeFromInline lang="bash" >}}
cat <<EOF | kind create cluster --name clusterA --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.110.0.0/16"
  serviceSubnet: "10.115.0.0/16"
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
{{< /codeFromInline >}}
And cluster B:

{{< codeFromInline lang="bash" >}}
cat <<EOF | kind create cluster --name clusterB --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.220.0.0/16"
  serviceSubnet: "10.225.0.0/16"
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
{{< /codeFromInline >}}
All the nodes in each cluster have routes to the podSubnet ranges assigned to the nodes of that same cluster.
To provide Pod to Pod connectivity between different clusters we just have to install those same routes in the nodes of the other cluster.

We can obtain the routes using kubectl:
{{< codeFromInline lang="bash" >}}
$ kubectl --context kind-clusterA get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{.spec.podCIDR}{" via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
ip route add 10.110.0.0/24 via 172.17.0.4
ip route add 10.110.1.0/24 via 172.17.0.3
ip route add 10.110.2.0/24 via 172.17.0.2

$ kubectl --context kind-clusterB get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{.spec.podCIDR}{" via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
ip route add 10.220.0.0/24 via 172.17.0.7
ip route add 10.220.1.0/24 via 172.17.0.6
ip route add 10.220.2.0/24 via 172.17.0.5
{{< /codeFromInline >}}
Then we just need to install the routes obtained from clusterA in each node of clusterB, and vice versa. This can be automated with a script like this:

{{< codeFromInline lang="bash" >}}
for c in clusterA clusterB; do
  for n in $(kind get nodes --name ${c}); do
    # Add static routes to the pods in the other cluster
    docker exec ${n} ip route add <POD_SUBNET> via <NODE_IP>
    # Add a static route to the services in the other cluster
    # We just need one route for the whole service subnet
    docker exec ${n} ip route add <SVC_SUBNET> via <NODE_IP>
    ...
  done
done
{{< /codeFromInline >}}
### Example: Emulate external VMs

By default Docker will attach all containers to the **docker0** bridge:
{{< codeFromInline lang="bash" >}}
$ docker run -d --name alpine alpine tail -f /dev/null
8b94e9dabea847c004ce9fd7a69cdbc82eb93e31857c25c0a8872706efb08a4d
$ docker exec -it alpine ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
{{< /codeFromInline >}}
That means that Pods will be able to reach other Docker containers that do not belong to any KIND cluster; however, those containers will not be able to answer the Pod IP addresses until we install the corresponding routes.

We can solve it by installing routes in the new containers to the Pod subnets of each node, as explained in the previous section.
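As a sketch, using the hypothetical per-node routes computed for cluster A in the previous section, the commands to run against the `alpine` container above would look like this (printed as a dry run; remove the `echo` to execute them):

```shell
# Sample routes as produced by the kubectl jsonpath command in the previous
# section (hypothetical addresses).
for route in "10.110.0.0/24 via 172.17.0.4" \
             "10.110.1.0/24 via 172.17.0.3" \
             "10.110.2.0/24 via 172.17.0.2"; do
  echo docker exec alpine ip route add ${route}
done
```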
### Example: Multiple network interfaces and Multi-Home Nodes

There can be scenarios that require multiple interfaces in the KIND nodes to test multi-homing, VLANs, CNI plugins, etc.

Typically, you will want to use loopback addresses for communication. We can configure those loopback addresses after the cluster has been created, and then modify the Kubernetes components to use them.

When creating the cluster we must add the loopback IP address of the control plane to the certificate SANs (the apiserver binds to "all interfaces" by default):
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# add the loopback to apiServer cert SANs
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-loopback
```
In order to create the network interfaces, you can use tools like [koko](https://github.com/redhat-nfvpe/koko) to create new networking interfaces on the KIND nodes; you can find several examples of creating complex topologies with containers in https://github.com/aojea/frr-lab.

Another alternative is [using Docker user defined bridges](https://docs.docker.com/network/bridge/#connect-a-container-to-a-user-defined-bridge):
```sh
LOOPBACK_PREFIX="1.1.1."
MY_BRIDGE="my_net2"
MY_ROUTE=10.0.0.0/24
MY_GW=172.16.17.1
# Create the second network
docker network create ${MY_BRIDGE}
# Configure the nodes to use the second network,
# assigning each node a distinct loopback address
i=1
for n in $(kind get nodes); do
  # Connect the node to the second network
  docker network connect ${MY_BRIDGE} ${n}
  # Configure a loopback address
  docker exec ${n} ip addr add ${LOOPBACK_PREFIX}${i}/32 dev lo
  # Add static routes
  docker exec ${n} ip route add ${MY_ROUTE} via ${MY_GW}
  i=$((i+1))
done
```
After the cluster has been created, we have to modify, on the control-plane node, the kube-apiserver `--advertise-address` flag in the static pod manifest at `/etc/kubernetes/manifests/kube-apiserver.yaml` (once the file is written, the kubelet restarts the pod with the new configuration):
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.17.0.4
```
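A hypothetical way to patch that flag from the host is a `sed` one-liner run inside the node; the node name and loopback address below are assumptions, and the command is printed as a dry run (remove the `echo` to apply it):

```shell
NODE="kind-control-plane"  # control-plane node name (assumed)
LOOPBACK="1.1.1.1"         # loopback address configured above (assumed)
echo docker exec "${NODE}" sed -i \
  "s/--advertise-address=.*/--advertise-address=${LOOPBACK}/" \
  /etc/kubernetes/manifests/kube-apiserver.yaml
```

The same approach can be used to edit `/var/lib/kubelet/kubeadm-flags.env` on each node.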
and then change the `node-ip` flag for the kubelets on all the nodes:

```
root@kind-worker:/# more /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --node-ip=172.17.0.4"
```
Finally, restart the kubelets so they use the new configuration, with `systemctl restart kubelet`.
It's important to note that calling `kubeadm init / join` again on the node will override `/var/lib/kubelet/kubeadm-flags.env`. An [alternative is to use /etc/default/kubelet](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd).
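For example, a minimal `/etc/default/kubelet` drop-in would look like the following sketch (the address is a hypothetical loopback from the script above):

```shell
KUBELET_EXTRA_ARGS=--node-ip=1.1.1.2
```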