deploy csm neg script and yaml #882

Merged · 1 commit · Oct 2, 2019
141 changes: 141 additions & 0 deletions docs/deploy/gke/csm/README.md
@@ -0,0 +1,141 @@
# Overview

This document describes how to deploy the self-managed Ingress-GCE controller in CSM (Cloud Service Mesh) mode.

# Prepare the Cluster

The cluster must satisfy the following requirements (a verification sketch follows the list):
* GKE version 1.14+
* [IP Alias](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips) enabled
* Default Ingress-GCE controller disabled
* [GKE Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) enabled
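
To double-check these settings on an existing cluster, the following `gcloud` queries may help (a quick sketch; the `workloadIdentityConfig.identityNamespace` field name assumes the beta API surface current at the time of writing):

```sh
# Master version, IP alias status, and Workload Identity namespace;
# empty output for the latter two means the feature is not enabled.
gcloud beta container clusters describe $CLUSTER --zone $ZONE \
    --format='value(currentMasterVersion,ipAllocationPolicy.useIpAliases,workloadIdentityConfig.identityNamespace)'

# The default Ingress-GCE controller is disabled when this prints True.
gcloud container clusters describe $CLUSTER --zone $ZONE \
    --format='value(addonsConfig.httpLoadBalancing.disabled)'
```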

## [Option 1] Create a new Cluster

```sh
gcloud container clusters create $CLUSTER --enable-ip-alias --cluster-version 1.14 \
    --zone $ZONE --addons=HorizontalPodAutoscaling \
    --identity-namespace=${PROJECT}.svc.id.goog
```

## [Option 2] Updating an existing cluster

```sh
# disable the default Ingress-GCE controller
gcloud container clusters update $CLUSTER --zone=$ZONE --update-addons=HttpLoadBalancing=DISABLED

# enable Workload Identity
gcloud beta container clusters update $CLUSTER --zone $ZONE \
    --identity-namespace=${PROJECT}.svc.id.goog

# update the node pool to use the GKE metadata server
gcloud beta container node-pools update $NODEPOOL_NAME \
    --cluster=$CLUSTER --zone=$ZONE \
    --workload-metadata-from-node=GKE_METADATA_SERVER
```
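
To confirm the node pool change took effect, you can inspect the workload metadata setting (a hedged check; `config.workloadMetadataConfig.nodeMetadata` is the beta-API field this flag maps to, to the best of our knowledge):

```sh
# Expected output: GKE_METADATA_SERVER
gcloud beta container node-pools describe $NODEPOOL_NAME --cluster=$CLUSTER --zone=$ZONE \
    --format='value(config.workloadMetadataConfig.nodeMetadata)'
```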

# Create a service account

```sh
# create a service account
gcloud iam service-accounts create glbc-service-account \
    --display-name "Service Account for GLBC" --project $PROJECT

# bind the compute.admin role to the service account
gcloud projects add-iam-policy-binding $PROJECT \
    --member serviceAccount:glbc-service-account@${PROJECT}.iam.gserviceaccount.com \
    --role roles/compute.admin

# allow the glbc k8s service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT}.svc.id.goog[kube-system/glbc]" \
    glbc-service-account@${PROJECT}.iam.gserviceaccount.com
```
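
As an optional sanity check, you can verify the Workload Identity binding before moving on:

```sh
# Should show roles/iam.workloadIdentityUser bound to the
# serviceAccount:${PROJECT}.svc.id.goog[kube-system/glbc] member.
gcloud iam service-accounts get-iam-policy \
    glbc-service-account@${PROJECT}.iam.gserviceaccount.com
```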

# Create K8s Roles

```sh
# Grant permission to the current GCP user to create new k8s ClusterRoles.
kubectl create clusterrolebinding one-binding-to-rule-them-all \
    --clusterrole=cluster-admin \
    --user=$(gcloud config list --project $PROJECT --format 'value(core.account)' 2>/dev/null)

kubectl create -f rbac.yaml

# Link the glbc k8s service account to the GCP service account created above.
kubectl annotate serviceaccount \
    --namespace kube-system glbc \
    iam.gke.io/gcp-service-account=glbc-service-account@${PROJECT}.iam.gserviceaccount.com
```
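
The annotation can be confirmed with a quick `kubectl` query (optional):

```sh
# Should print glbc-service-account@${PROJECT}.iam.gserviceaccount.com
kubectl get serviceaccount glbc -n kube-system \
    -o jsonpath="{.metadata.annotations['iam\.gke\.io/gcp-service-account']}"
```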

# Generate the ConfigMap for the Ingress controller

## [Option 1] Generate gce.conf with the generate.sh script

```sh
./generate.sh -n $CLUSTER -z $ZONE -p $PROJECT

kubectl create configmap gce-config --from-file=gce.conf -n kube-system
```

## [Option 2] Raw commands

```sh
NETWORK_NAME=$(basename $(gcloud container clusters describe \
    $CLUSTER --project $PROJECT --zone=$ZONE \
    --format='value(networkConfig.network)'))

SUBNETWORK_NAME=$(basename $(gcloud container clusters describe \
    $CLUSTER --project $PROJECT \
    --zone=$ZONE --format='value(networkConfig.subnetwork)'))

INSTANCE_GROUP=$(gcloud container clusters describe $CLUSTER --project $PROJECT --zone=$ZONE \
    --format='flattened(nodePools[].instanceGroupUrls[].scope().segment())' | \
    cut -d ':' -f2 | tr -d '[:space:]')

INSTANCE=$(gcloud compute instance-groups list-instances $INSTANCE_GROUP --project $PROJECT \
    --zone=$ZONE --format="value(instance)" --limit 1)

NETWORK_TAGS=$(gcloud compute instances describe $INSTANCE --project \
    $PROJECT --format="value(tags.items)")

# Overwrite (rather than append to) gce.conf so re-runs do not duplicate entries.
cat <<EOF > gce.conf
[global]
token-url = nil
project-id = $PROJECT
network-name = $NETWORK_NAME
subnetwork-name = $SUBNETWORK_NAME
node-instance-prefix = gke-$CLUSTER
node-tags = $NETWORK_TAGS
local-zone = $ZONE
EOF

kubectl create configmap gce-config --from-file=gce.conf -n kube-system
```
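
Whichever option you used, it is worth confirming the ConfigMap contents before deploying the controller:

```sh
kubectl get configmap gce-config -n kube-system -o jsonpath='{.data.gce\.conf}'
```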

# Deploy Ingress controller

```sh
kubectl create -f default-http-backend.yaml
kubectl create -f glbc.yaml
```
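
Once both manifests are applied, check that the backend and the controller come up healthy (the label selectors below match the manifests in this PR):

```sh
# Both deployments should reach Running.
kubectl get pods -n kube-system -l k8s-app=glbc
kubectl get pods -n kube-system -l k8s-app=gcp-lb-controller

# Controller logs; in CSM mode you should see NEG syncs for services
# outside the namespaces listed in --csm-service-skip-namespaces.
kubectl logs -n kube-system deployment/l7-lb-controller | head -n 40
```

For a rough end-to-end check you can expose a throwaway Deployment and watch for a NEG status annotation (a sketch only; this assumes CSM mode syncs NEGs for regular services, and the image and resource names are illustrative):

```sh
kubectl create deployment csm-echo --image=gcr.io/kubernetes-e2e-test-images/echoserver:2.2
kubectl expose deployment csm-echo --port=80 --target-port=8080
kubectl get svc csm-echo \
    -o jsonpath="{.metadata.annotations['cloud\.google\.com/neg-status']}"
```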

# Clean up

```sh
kubectl delete -f default-http-backend.yaml
kubectl delete -f glbc.yaml
kubectl delete configmap gce-config -n kube-system
kubectl delete -f rbac.yaml
kubectl delete clusterrolebinding one-binding-to-rule-them-all
```

## [Optional] Delete service account
```sh
# Remove the IAM binding first, then delete the service account.
gcloud projects remove-iam-policy-binding $PROJECT \
    --member serviceAccount:glbc-service-account@${PROJECT}.iam.gserviceaccount.com \
    --role roles/compute.admin

gcloud iam service-accounts delete glbc-service-account@${PROJECT}.iam.gserviceaccount.com
```
66 changes: 66 additions & 0 deletions docs/deploy/gke/csm/default-http-backend.yaml
@@ -0,0 +1,66 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: l7-default-backend
  namespace: kube-system
  labels:
    k8s-app: glbc
    kubernetes.io/name: "GLBC"
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: glbc
  template:
    metadata:
      labels:
        k8s-app: glbc
        name: glbc
    spec:
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: k8s.gcr.io/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  # This must match the --default-backend-service argument of the l7 lb
  # controller and is required because GCE mandates a default backend.
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: glbc
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "GLBCDefaultBackend"
spec:
  # The default backend must be of type NodePort.
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: glbc

14 changes: 14 additions & 0 deletions docs/deploy/gke/csm/gce.conf.temp
@@ -0,0 +1,14 @@
[global]
token-url = nil
# Your cluster's project
project-id = [PROJECT]
# Your cluster's network
network-name = [NETWORK]
# Your cluster's subnetwork
subnetwork-name = [SUBNETWORK]
# Prefix for your cluster's IG
node-instance-prefix = gke-[CLUSTER_NAME]
# Network tags for your cluster's IG
node-tags = [NETWORK_TAGS]
# Zone the cluster lives in
local-zone = [ZONE]
91 changes: 91 additions & 0 deletions docs/deploy/gke/csm/generate.sh
@@ -0,0 +1,91 @@
#!/bin/bash

function usage() {
  echo "Usage: ./generate.sh -n CLUSTER -z ZONE [-p PROJECT]"
  echo
  echo "Generates gce.conf used by glbc.yaml"
  echo
  echo "  -p, --project-name   Name of the project (Optional)"
  echo "  -n, --cluster-name   Name of the cluster (Required)"
  echo "  -z, --zone           Zone the cluster is in (Required)"
  echo "  --help               Display this help and exit"
  exit
}

function arg_check {
  # Check that the necessary arguments were provided and that they are correct.
  if [[ -z "$ZONE" || -z "$CLUSTER_NAME" ]];
  then
    usage
  fi
}

while [[ $# -gt 0 ]]
do
  key="$1"
  case $key in
    -h|--help)
      usage
      shift
      shift
      ;;
    -n|--cluster-name)
      CLUSTER_NAME=$2
      shift
      shift
      ;;
    -p|--project-name)
      PROJECT_ID=$2
      shift
      shift
      ;;
    -z|--zone)
      ZONE=$2
      shift
      shift
      ;;
    *)
      echo "Unknown argument $1"
      echo
      usage
      ;;
  esac
done

if [[ -z $PROJECT_ID ]]; then
  # Get the project id associated with the cluster.
  PROJECT_ID=`gcloud config list --format 'value(core.project)' 2>/dev/null`
fi

arg_check

# Populate gce.conf.gen from our template.
if [[ -z $NETWORK_NAME ]]; then
  NETWORK_NAME=$(basename $(gcloud container clusters describe $CLUSTER_NAME --project $PROJECT_ID --zone=$ZONE \
    --format='value(networkConfig.network)'))
fi
if [[ -z $SUBNETWORK_NAME ]]; then
  SUBNETWORK_NAME=$(basename $(gcloud container clusters describe $CLUSTER_NAME --project $PROJECT_ID \
    --zone=$ZONE --format='value(networkConfig.subnetwork)'))
fi

# Getting network tags is painful. Get the instance groups, map to an instance,
# and get the node tag from it (they should be the same across all nodes -- we don't
# know how to handle it, otherwise).
if [[ -z $NETWORK_TAGS ]]; then
  INSTANCE_GROUP=$(gcloud container clusters describe $CLUSTER_NAME --project $PROJECT_ID --zone=$ZONE \
    --format='flattened(nodePools[].instanceGroupUrls[].scope().segment())' | \
    cut -d ':' -f2 | tr -d '[:space:]')
  INSTANCE=$(gcloud compute instance-groups list-instances $INSTANCE_GROUP --project $PROJECT_ID \
    --zone=$ZONE --format="value(instance)" --limit 1)
  NETWORK_TAGS=$(gcloud compute instances describe $INSTANCE --project $PROJECT_ID --format="value(tags.items)")
fi

sed "s/\[PROJECT\]/$PROJECT_ID/" gce.conf.temp | \
sed "s/\[NETWORK\]/$NETWORK_NAME/" | \
sed "s/\[SUBNETWORK\]/$SUBNETWORK_NAME/" | \
sed "s/\[CLUSTER_NAME\]/$CLUSTER_NAME/" | \
sed "s/\[NETWORK_TAGS\]/$NETWORK_TAGS/" | \
sed "s/\[ZONE\]/$ZONE/" > gce.conf

echo "Generated gce.conf for cluster: $CLUSTER_NAME"
78 changes: 78 additions & 0 deletions docs/deploy/gke/csm/glbc.yaml
@@ -0,0 +1,78 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: l7-lb-controller
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
  labels:
    k8s-app: gcp-lb-controller
    kubernetes.io/name: "GLBC"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: gcp-lb-controller
  template:
    metadata:
      labels:
        k8s-app: gcp-lb-controller
        name: gcp-lb-controller
    spec:
      serviceAccountName: glbc
      terminationGracePeriodSeconds: 600
      containers:
      - image: k8s.gcr.io/ingress-gce-glbc-amd64:v1.7.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8086
            scheme: HTTP
          initialDelaySeconds: 30
          # healthz reaches out to GCE
          periodSeconds: 30
          timeoutSeconds: 15
          successThreshold: 1
          failureThreshold: 5
        name: l7-lb-controller
        volumeMounts:
        - mountPath: /etc/gce/
          name: gce-config-volume
        resources:
          # Request is set to accommodate this pod alongside the other
          # master components on a single core master.
          # TODO: Make resource requirements depend on the size of the cluster
          requests:
            cpu: 10m
            memory: 50Mi
        command:
        - /glbc
        - -v2
        - --config-file-path=/etc/gce/gce.conf
        - --healthz-port=8086
        - --logtostderr
        - --sync-period=600s
        - --gce-ratelimit=ga.Operations.Get,qps,10,100
        - --gce-ratelimit=alpha.Operations.Get,qps,10,100
        - --gce-ratelimit=beta.Operations.Get,qps,10,100
        - --gce-ratelimit=ga.BackendServices.Get,qps,1.8,1
        - --gce-ratelimit=beta.BackendServices.Get,qps,1.8,1
        - --gce-ratelimit=ga.HealthChecks.Get,qps,1.8,1
        - --gce-ratelimit=alpha.HealthChecks.Get,qps,1.8,1
        - --gce-ratelimit=beta.NetworkEndpointGroups.Get,qps,1.8,1
        - --gce-ratelimit=beta.NetworkEndpointGroups.AttachNetworkEndpoints,qps,1.8,1
        - --gce-ratelimit=beta.NetworkEndpointGroups.DetachNetworkEndpoints,qps,1.8,1
        - --gce-ratelimit=beta.NetworkEndpointGroups.ListNetworkEndpoints,qps,1.8,1
        - --gce-ratelimit=ga.NetworkEndpointGroups.Get,qps,1.8,1
        - --gce-ratelimit=ga.NetworkEndpointGroups.AttachNetworkEndpoints,qps,1.8,1
        - --gce-ratelimit=ga.NetworkEndpointGroups.DetachNetworkEndpoints,qps,1.8,1
        - --gce-ratelimit=ga.NetworkEndpointGroups.ListNetworkEndpoints,qps,1.8,1
        - --enable-csm=true
        - --csm-service-skip-namespaces=kube-system,istio-system
      volumes:
      - name: gce-config-volume
        configMap:
          name: gce-config
          items:
          - key: gce.conf
            path: gce.conf