This example shows that NS Clients and an NSE on the same node can find each other. The NS Clients and the NSE use the kernel mechanism to connect to their local OvS forwarder. The NS Client connections are multiplexed over a single veth pair interface on the NSE side.
Make sure that you have completed the steps from the OvS setup. When the host is configured with hugepages, the NSE pod consumes more heap memory because of the VPP process, so in that case the NSE pod should be created with a memory limit greater than 2.2 GB (a sketch of such a limit is shown after the NSE patch below).
Create test namespace:
NAMESPACE=($(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/041ba2468fb8177f53926af0eab984850aa682c2/examples/use-cases/namespace.yaml)[0])
NAMESPACE=${NAMESPACE:10}
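The variable should now hold only the generated namespace name (the substring expansion above strips the namespace/ prefix from the kubectl output). Optionally, verify it:
echo ${NAMESPACE}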
Select node to deploy NSC and NSE:
NODE=($(kubectl get nodes -o go-template='{{range .items}}{{ if not .spec.taints }}{{index .metadata.labels "kubernetes.io/hostname"}} {{end}}{{end}}')[0])
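This picks the first untainted node. Optionally, confirm that a node name was captured:
echo ${NODE}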
Create kustomization file:
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE}
bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2
- https://github.com/networkservicemesh/deployments-k8s/apps/nse-vlan-vpp?ref=041ba2468fb8177f53926af0eab984850aa682c2
patchesStrategicMerge:
- patch-nsc.yaml
- patch-nse.yaml
EOF
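Optionally, render the manifests locally before applying them to check that both patches resolve against the bases (assumes a kubectl version with built-in kustomize support):
kubectl kustomize .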
Create NSC patch:
cat > patch-nsc.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nsc-kernel
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: nsc
          env:
            - name: NSM_NETWORK_SERVICES
              value: kernel://vlan-vpp-responder/nsm-1
      nodeSelector:
        kubernetes.io/hostname: ${NODE}
EOF
Create NSE patch:
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.100/30
      nodeSelector:
        kubernetes.io/hostname: ${NODE}
EOF
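If the host is configured with hugepages (see the note at the top), the NSE pod should be given a memory limit above 2.2 GB. A minimal sketch of the same patch with such a limit added follows; the 3Gi value is an assumption, any limit above 2.2 GB should do:
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.100/30
          resources:
            limits:
              memory: 3Gi  # assumed value; anything above 2.2 GB per the note above
      nodeSelector:
        kubernetes.io/hostname: ${NODE}
EOF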
Deploy NSC and NSE:
kubectl apply -k .
Wait for the applications to be ready:
kubectl wait --for=condition=ready --timeout=1m pod -l app=nsc-kernel -n ${NAMESPACE}
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -n ${NAMESPACE}
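Optionally, check that the NSC replicas and the NSE all landed on the selected node:
kubectl get pods -n ${NAMESPACE} -o wide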
Select one NS Client pod and the NSE pod by their labels:
NSC=$(kubectl get pods -l app=nsc-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{" "}}{{end}}' | cut -d' ' -f1)
TARGET_IP=$(kubectl exec -ti ${NSC} -n ${NAMESPACE} -- ip route show | grep 172.16 | cut -d' ' -f1)
NSE=$(kubectl get pods -l app=nse-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
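Optionally, make sure each variable is non-empty before pinging:
echo NSC=${NSC} NSE=${NSE} TARGET_IP=${TARGET_IP}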
Ping from NSC to NSE:
kubectl exec ${NSC} -n ${NAMESPACE} -- ping -c 4 ${TARGET_IP}
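To see the connection from the client side, you can optionally inspect the interfaces and routes inside the NSC; the kernel interface is expected to show up under the name taken from the kernel:// URL in the NSC patch (nsm-1):
kubectl exec ${NSC} -n ${NAMESPACE} -- ip addr
kubectl exec ${NSC} -n ${NAMESPACE} -- ip route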
Delete ns:
kubectl delete ns ${NAMESPACE}