diff --git a/docs/Manual/Tutorials/Kubernetes/AKS.md b/docs/Manual/Tutorials/Kubernetes/AKS.md
index 11b27466b..a6fd87233 100644
--- a/docs/Manual/Tutorials/Kubernetes/AKS.md
+++ b/docs/Manual/Tutorials/Kubernetes/AKS.md
@@ -7,22 +7,27 @@
 * [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest)

 ## Deploy cluster
+
 * In Azure dashboard choose **Create a resource**
 * Choose **Kubernetes Service**

 ## Cluster basics (version >=1.10)
+
 ![basics](./aks-create-basics.png)

 ## Cluster authentication (Enable RBAC)
+
 ![basics](./aks-create-auth.png)

 ## Wait for cluster to be created
+
 ![basics](./aks-create-valid.png)

 ## Move to control using `kubectl`
-* Login to Azure using CLI
-```
+- Log in to Azure using the CLI
+
+  ```
   $ az login
   [
     {
@@ -38,42 +43,49 @@
       }
     }
   ]
-```
+  ```
+
+- Get AKS credentials to merge with your local config, using the resource
+  group and cluster names from the deployment above

-* Get AKS credentials to merge with local config, using resource group and cluster names used for above deployment
-```
-  $ az aks get-credentials --resource-group clifton --name ArangoDB
-```
+  ```
+  $ az aks get-credentials --resource-group clifton --name ArangoDB
+  ```

-* Verify successful merge
-```
+- Verify the successful merge
+
+  ```
   $ kubectl get svc
   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38m
-```
+  ```
+
+- Initialize `helm`

-* Initialise `helm`
-```
+  ```
   $ kubectl create serviceaccount --namespace kube-system tiller
   serviceaccount/tiller created
-```
-```
+  ```
+
+  ```
   $ kubectl create clusterrolebinding tiller-cluster-rule \
     --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
   clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
-```
-```
+  ```
+
+  ```
   $ helm init --service-account tiller
-  $HELM_HOME has been configured at /home/kaveh/.helm.
+  $HELM_HOME has been configured at /home/xxx/.helm.
   ...
   Happy Helming!
   Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
-```
+  ```
+
+- Deploy ArangoDB operator

-* Deploy ArabgoDB operator
-```
+  ```
   $ helm install \
     github.com/arangodb/kube-arangodb/releases/download/X.X.X/kube-arangodb.tgz
   NAME: orderly-hydra
@@ -83,9 +95,10 @@
 ...
 See https://docs.arangodb.com/devel/Manual/Tutorials/Kubernetes/ for how to
 get started.
-```
+  ```
+
+- Deploy ArangoDB cluster

-* Deploy ArangoDB cluster
-```
+  ```
   $ kubectl apply -f https://raw.githubusercontent.com/arangodb/kube-arangodb/master/examples/simple-cluster.yaml
-```
+  ```
diff --git a/docs/Manual/Tutorials/Kubernetes/EKS.md b/docs/Manual/Tutorials/Kubernetes/EKS.md
index c8aa31cc7..d506392e0 100644
--- a/docs/Manual/Tutorials/Kubernetes/EKS.md
+++ b/docs/Manual/Tutorials/Kubernetes/EKS.md
@@ -23,7 +23,8 @@ $ aws --version

 ### Configure AWS client

-Refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to accordingly fill in the below with your credentials.
+Refer to the [AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)
+to fill in the fields below with your credentials.
 Pay special attention to the correct region information to find your cluster next.

 ```
@@ -75,7 +76,8 @@ $ kubectl get nodes

 ### Create worker Stack

 On Amazon EKS, we need to launch worker nodes, as the cluster has none.
-Open Amazon's [cloud formation console](https://console.aws.amazon.com/cloudformation/) and choose `Create Stack` by specifying this S3 template URL:
+Open Amazon's [CloudFormation console](https://console.aws.amazon.com/cloudformation/)
+and choose `Create Stack` by specifying this S3 template URL:

 ```
 https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml
 ```
@@ -85,7 +87,9 @@

 ### Worker stack details

-Pay good attention to details here. If your input is not complete, your worker nodes are either not spawned or you won't be able to integrate the workers into your kubernetes cluster.
+Pay close attention to the details here. If your input is incomplete, your
+worker nodes either will not spawn or you will not be able to integrate the
+workers into your Kubernetes cluster.

 **Stack name**: Choose a name for your stack. For example ArangoDB-stack

@@ -101,7 +105,7 @@
 **NodeInstanceType**: Choose an instance type for your worker nodes. For this
 test we went with the default `t2.medium` instances.

-**NodeImageId**: Dependent on the region, there are two image Ids for boxes with and wothout GPU support.
+**NodeImageId**: Depending on the region, there are two image IDs for boxes with and without GPU support.

 | Region    | without GPU           | with GPU              |
 |-----------|-----------------------|-----------------------|
@@ -180,7 +184,7 @@ $ kubectl create clusterrolebinding tiller-cluster-rule \
   --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
 clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
 ```
-* Initialise `helm`
+* Initialize `helm`
 ```
 $ helm init --service-account tiller
 $HELM_HOME has been configured at ~/.helm.
diff --git a/docs/Manual/Tutorials/Kubernetes/GKE.md b/docs/Manual/Tutorials/Kubernetes/GKE.md
index b948ca99d..0bd2f80a0 100644
--- a/docs/Manual/Tutorials/Kubernetes/GKE.md
+++ b/docs/Manual/Tutorials/Kubernetes/GKE.md
@@ -15,9 +15,9 @@
 Click on `CREATE CLUSTER`.
 In the form that follows, enter information as seen in the screenshot below.

-![create a cluser](./gke-create-cluster.png)
+![create a cluster](./gke-create-cluster.png)

-We've succesfully ran clusters with 4 `1 vCPU` nodes or 3 `2 vCPU` nodes.
+We have successfully run clusters with 4 `1 vCPU` nodes or 3 `2 vCPU` nodes.
 Smaller node configurations will likely lead to unschedulable `Pods`.

 Once you click `Create`, you'll return to the list of clusters and your
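All three tutorials end the same way: install the operator via `helm`, then `kubectl apply` the example `simple-cluster.yaml` manifest. A quick way to confirm the deployment actually came up, regardless of provider, is sketched below; the deployment name `example-simple-cluster` and the `-ea` (external access) service suffix are assumptions based on the example manifest and the operator's naming conventions, so adjust them to your deployment's `metadata.name`.

```
# Watch the ArangoDB pods (agents, dbservers, coordinators) come up.
$ kubectl get pods --watch

# Find the externally reachable endpoint of the deployment.
# Service name assumed from the example manifest; adjust as needed.
$ kubectl get svc example-simple-cluster-ea
```

On AKS, EKS, and GKE the external-access service is typically of type `LoadBalancer`, so its `EXTERNAL-IP` may show `<pending>` for a few minutes while the cloud provider allocates an address.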