
scf on kind


This is a guide on how to run scf (SUSE Cloud Foundry) on kind (Kubernetes IN Docker).

Kind runs Kubernetes components as Docker containers inside a Docker container: with a single node, your whole cluster lives inside one container. Multiple nodes are supported, but all of them run as local processes on your host system.
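
For example, once a cluster exists (created in the next step), the node shows up as an ordinary container on the host; the exact name depends on the cluster name and kind version:

# The kind node is just a container on the host
docker ps --filter "name=control-plane" --format 'table {{.Names}}\t{{.Image}}'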

Get kind

# Make sure you use the latest release here (no "latest" URL because they are still pre-releases)
wget https://github.com/kubernetes-sigs/kind/releases/download/0.2.1/kind-linux-amd64
mv kind-linux-amd64 kind
chmod +x kind
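
As a quick sanity check, the binary can report its version (the exact output varies between releases):

./kind version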

Create a cluster (with one node)

./kind create cluster

Get your kubeconfig

cluster_name=$(./kind get clusters)
cp $(./kind get kubeconfig-path --name="$cluster_name") kubeconfig
export KUBECONFIG=$PWD/kubeconfig
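
With KUBECONFIG exported, kubectl should now talk to the kind cluster:

kubectl cluster-info
kubectl get nodes
# The single node should reach the "Ready" state after a short while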

Add role bindings for scf

The permissions here are overly broad, but since we are building a temporary local cluster, that should be OK.

kubectl create clusterrolebinding admin --clusterrole=cluster-admin --user=system:serviceaccount:kube-system:default
kubectl create clusterrolebinding uaaadmin --clusterrole=cluster-admin --user=system:serviceaccount:uaa:default
kubectl create clusterrolebinding scfadmin --clusterrole=cluster-admin --user=system:serviceaccount:scf:default
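
You can verify that the bindings were created:

kubectl get clusterrolebinding admin uaaadmin scfadmin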

Set up a hostpath storage class with dynamic provisioning

After this PR it shouldn't be needed anymore: https://github.com/kubernetes-sigs/kind/pull/397/files
Relevant issue: https://github.com/kubernetes-sigs/kind/issues/118
The template below comes from: https://github.com/kubernetes/kubernetes/issues/52441#issuecomment-361355696

cat > storageclass.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hostpath-provisioner
  namespace: kube-system
---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hostpath-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hostpath-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: hostpath-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: hostpath-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: hostpath-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hostpath-provisioner
subjects:
- kind: ServiceAccount
  name: hostpath-provisioner
  namespace: kube-system
---

# -- Create a pod in the kube-system namespace to run the host path provisioner
apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  name: hostpath-provisioner
spec:
  serviceAccountName: hostpath-provisioner
  containers:
    - name: hostpath-provisioner
      image: mazdermind/hostpath-provisioner:latest
      imagePullPolicy: "IfNotPresent"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PV_DIR
          value: /mnt/kubernetes-pv-manual

      volumeMounts:
        - name: pv-volume
          mountPath: /mnt/kubernetes-pv-manual
  volumes:
    - name: pv-volume
      hostPath:
        path: /mnt/kubernetes-pv-manual
---

# -- Create the standard storage class for running on-node hostpath storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: kube-system
  name: persistent
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: hostpath
EOF

kubectl delete storageclass standard
kubectl create -f storageclass.yaml
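
To confirm that dynamic provisioning works, you can create a small throwaway PVC (the name test-pvc below is arbitrary) and check that it gets bound:

kubectl get storageclass
# "persistent" should now be listed as the default class

cat > test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
EOF

kubectl create -f test-pvc.yaml
kubectl get pvc test-pvc
# STATUS should turn to "Bound" after a few seconds
kubectl delete -f test-pvc.yaml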

Deploy tiller

helm init --upgrade
watch helm version
# Wait until the "Server" part appears

Create an scf-config-values.yaml file [DIEGO]

If you are deploying with Diego:

container_id=$(docker ps -f "name=${cluster_name}-control-plane" -q)
container_ip=$(docker inspect $container_id | jq -r .[0].NetworkSettings.Networks.bridge.IPAddress)
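
Sanity-check the resolved address; it should be the Docker bridge IP of the kind node (typically in the 172.17.0.0/16 range):

echo "$container_ip"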

cat > scf-config-values.yaml <<EOF
env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: ${container_ip}.nip.io


  # UAA host and port
  UAA_HOST: uaa.${container_ip}.nip.io
  UAA_PORT: 2793

sizing:
  cc_uploader:
    capabilities: ["SYS_RESOURCE"]
  diego_api:
    capabilities: ["SYS_RESOURCE"]
  diego_brain:
    capabilities: ["SYS_RESOURCE"]
  diego_ssh:
    capabilities: ["SYS_RESOURCE"]
  nats:
    capabilities: ["SYS_RESOURCE"]
  router:
    capabilities: ["SYS_RESOURCE"]
  routing_api:
    capabilities: ["SYS_RESOURCE"]

kube:
  # The IP address assigned to the kube node pointed to by the domain.
  external_ips: ["${container_ip}"]

  # Run kubectl get storageclasses
  # to view your available storage classes
  storage_class:
    persistent: "persistent"
    shared: "shared"

  # The registry the images will be fetched from.
  # The values below should work for
  # a default installation from the SUSE registry.
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: none

secrets:
  # Create a password for your CAP cluster
  CLUSTER_ADMIN_PASSWORD: password

  # Create a password for your UAA client secret
  UAA_ADMIN_CLIENT_SECRET: password
EOF

Deploy metrics server [EIRINI]

helm install stable/metrics-server --name=metrics-server --set args[0]="--kubelet-preferred-address-types=InternalIP" --set args[1]="--kubelet-insecure-tls"
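
Once the metrics-server pod is ready, node metrics should become available (this can take a minute or two):

kubectl top nodes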

Create an scf-config-values.yaml file [EIRINI]

If you are deploying with Eirini:

container_id=$(docker ps -f "name=${cluster_name}-control-plane" -q)
container_ip=$(docker inspect $container_id | jq -r .[0].NetworkSettings.Networks.bridge.IPAddress)

cat > scf-config-values.yaml <<EOF
env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: ${container_ip}.nip.io


  # UAA host and port
  UAA_HOST: uaa.${container_ip}.nip.io
  UAA_PORT: 2793

enable:
  eirini: true

kube:
  # The IP address assigned to the kube node pointed to by the domain.
  external_ips: ["${container_ip}"]

  # Run kubectl get storageclasses
  # to view your available storage classes
  storage_class:
    persistent: "persistent"
    shared: "shared"

  # The registry the images will be fetched from.
  # The values below should work for
  # a default installation from the SUSE registry.
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: rbac

secrets:
  # Create a password for your CAP cluster
  CLUSTER_ADMIN_PASSWORD: password

  # Create a password for your UAA client secret
  UAA_ADMIN_CLIENT_SECRET: password
EOF

NOTE: If you use auth: none in the file above, some Eirini components won't be deployed correctly.

Create the eirini namespace [EIRINI]

kubectl create namespace eirini

Deploy UAA

helm repo add suse https://kubernetes-charts.suse.com/
helm install suse/uaa --name susecf-uaa --namespace uaa --values scf-config-values.yaml
watch -c 'kubectl get pods --namespace uaa'
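
Once all uaa pods are ready, the UAA endpoint should answer on the host and port configured in scf-config-values.yaml (this assumes container_ip is still set from the step above):

curl -k "https://uaa.${container_ip}.nip.io:2793/info"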

Deploy SCF

Once the UAA pods are up and ready, you can continue by deploying scf:

SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
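
As a sanity check, the decoded value should look like a PEM certificate:

echo "${CA_CERT}" | head -n 1
# -----BEGIN CERTIFICATE-----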

helm install suse/cf --name susecf-scf --namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

watch kubectl get pods -n scf

Log in to scf

Once all pods in the scf namespace are up and ready, you can log in to scf:

cf api --skip-ssl-validation https://api.${container_ip}.nip.io
cf login -u admin -p password -o system
cf create-space default
cf target -s default

Deploy 12factor app

git clone https://github.com/scf-samples/12factor.git
cd 12factor
cf push
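
Once the push finishes, the app should be listed as running, together with its route:

cf apps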

Your app should now be reachable in your browser:

xdg-open https://12factor.${container_ip}.nip.io

Delete the cluster

./kind delete cluster --name=${cluster_name}

Notes

You can create a cluster with more than one node. In that case you will need to set up a different storage class, because hostpath storage will not work with scf on a multi-node cluster.

cat > config-ha.yaml <<EOF
# a technically valid config file for a multi-node (HA) cluster
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  replicas: 1
- role: worker
  replicas: 2
EOF

./kind create cluster --config config-ha.yaml
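
After exporting the new cluster's kubeconfig (same procedure as above), all nodes should show up:

kubectl get nodes
# Expect one control-plane node and two workers, all "Ready"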

Run scf compiled from sources in kind

Kind can be used to deploy SCF compiled from sources. This can be useful for running tests, or for a live run of a particular set of changes when, for example, VMs can't be used on the host.

The main disadvantage of this approach is that Docker images built on the host machine are not automatically available to the kind nodes, so we need to copy them in manually after building them. An example workflow:

# Assumes .envrc and custom variables are loaded,
# and a kind cluster is configured as described above

export DOMAIN="${container_ip}.nip.io"
export VAGRANT_EXTERNAL_IP="${container_ip}"

make vagrant-prep

# After prep, copy images to kind cluster
IFS=$'\n'
for i in $(docker images -a); do
    img=$(echo "$i" | awk '{ print $1 ":" $2 }')
    if echo "$img" | grep -q "$FISSILE_DOCKER_ORGANIZATION"; then
        echo "Loading $img into the kind node"
        ./kind load docker-image "$img"
    fi
done
unset IFS
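
To double-check which images the loop picks up, you can list the fissile-built images on the host:

docker images --format '{{.Repository}}:{{.Tag}}' | grep "$FISSILE_DOCKER_ORGANIZATION"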

kubectl create namespace eirini

make run-eirini