Docs: Formatting #310

Merged (2 commits) on Dec 4, 2018
61 changes: 37 additions & 24 deletions docs/Manual/Tutorials/Kubernetes/AKS.md
* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest)

## Deploy cluster

* In Azure dashboard choose **Create a resource**
* Choose **Kubernetes Service**

## Cluster basics (version >=1.10)

![basics](./aks-create-basics.png)

## Cluster authentication (Enable RBAC)

![auth](./aks-create-auth.png)

## Wait for cluster to be created

![valid](./aks-create-valid.png)
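If you prefer the command line over the portal, the same cluster can be created with the Azure CLI. A minimal sketch, assuming the resource group `clifton` and cluster name `ArangoDB` used later in this tutorial; the location, node count, and the exact flag for enabling RBAC (as in the screenshot above) depend on your subscription and CLI version:

```
$ az group create --name clifton --location westeurope
$ az aks create --resource-group clifton --name ArangoDB --node-count 3
```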

## Move to control using `kubectl`

- Login to Azure using CLI

```
$ az login
[
  {
    ...
  }
]
```

- Get AKS credentials to merge with local config, using the resource group and
  cluster names used for the above deployment

```
$ az aks get-credentials --resource-group clifton --name ArangoDB
```

- Verify successful merge

```
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38m
```

- Initialize `helm`

```
$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
```

```
$ kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
```

```
$ helm init --service-account tiller
$HELM_HOME has been configured at /home/xxx/.helm.
...
Happy Helming!
Tiller (the Helm server-side component) has been
installed into your Kubernetes Cluster.
```
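Before moving on, it can help to confirm that Tiller actually came up; a sketch (the pod name pattern is an assumption):

```
$ kubectl get pods --namespace kube-system | grep tiller
$ helm version
```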

- Deploy ArangoDB operator

```
$ helm install \
   github.com/arangodb/kube-arangodb/releases/download/X.X.X/kube-arangodb.tgz
NAME: orderly-hydra
...
See https://docs.arangodb.com/devel/Manual/Tutorials/Kubernetes/
for how to get started.
```
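To check that the operator is installed and running, you can look for its custom resource definitions and pods; a sketch (the `arango` name fragment is an assumption and may differ between releases):

```
$ kubectl get crd | grep arango
$ kubectl get pods | grep arango
```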

- Deploy ArangoDB cluster

```
$ kubectl apply -f https://raw.githubusercontent.com/arangodb/kube-arangodb/master/examples/simple-cluster.yaml
```
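Once the example deployment is applied, the operator starts creating Pods for it; a hedged way to watch progress, assuming the `arangodeployments` resource name provided by the operator's CRDs:

```
$ kubectl get arangodeployments
$ kubectl get pods --watch
```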
14 changes: 9 additions & 5 deletions docs/Manual/Tutorials/Kubernetes/EKS.md

### Configure AWS client

Refer to the [AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)
and fill in the fields below with your credentials.
Pay special attention to the region setting, as you will need it to locate your cluster later.

```
...
```
### Create worker Stack

On Amazon EKS, we need to launch worker nodes ourselves, as a new cluster has none.
Open Amazon's [cloud formation console](https://console.aws.amazon.com/cloudformation/)
and choose `Create Stack`, specifying this S3 template URL:

```
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml
```

### Worker stack details

Pay close attention to the details here. If your input is incomplete, your worker
nodes will either not be spawned, or you will not be able to integrate them
into your Kubernetes cluster.
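The stack can also be created from the AWS CLI instead of the console; a sketch, assuming the stack name suggested below and leaving the remaining `ParameterKey`/`ParameterValue` pairs to be filled in from the fields described next:

```
$ aws cloudformation create-stack \
    --stack-name ArangoDB-stack \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=NodeInstanceType,ParameterValue=t2.medium
```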

**Stack name**: Choose a name for your stack. For example, `ArangoDB-stack`.

Expand All @@ -101,7 +105,7 @@ Pay good attention to details here. If your input is not complete, your worker n

**NodeInstanceType**: Choose an instance type for your worker nodes. For this test we went with the default `t2.medium` instances.

**NodeImageId**: Depending on the region, there are two image IDs, for machines with and without GPU support.

| Region | without GPU | with GPU |
|-----------|-----------------------|-----------------------|
```
$ kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
```
* Initialize `helm`
```
$ helm init --service-account tiller
$HELM_HOME has been configured at ~/.helm.
4 changes: 2 additions & 2 deletions docs/Manual/Tutorials/Kubernetes/GKE.md
Click on `CREATE CLUSTER`.

In the form that follows, enter information as seen in the screenshot below.

![create a cluster](./gke-create-cluster.png)

We have successfully run clusters with 4 `1 vCPU` nodes or 3 `2 vCPU` nodes.
Smaller node configurations will likely lead to unschedulable `Pods`.
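The same cluster can be created non-interactively with the `gcloud` CLI; a minimal sketch matching the 3-node `2 vCPU` sizing above (the cluster name, zone, and machine type are assumptions):

```
$ gcloud container clusters create arangodb-tutorial \
    --zone us-central1-a \
    --machine-type n1-standard-2 \
    --num-nodes 3
```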

Once you click `Create`, you'll return to the list of clusters and your