AUTO: Sync Kubernetes docs to ScalarDB Enterprise docs site repo
josh-wong committed Feb 26, 2024
1 parent 3d82404 commit bdd10d6
Showing 53 changed files with 7,288 additions and 236 deletions.
29 changes: 2 additions & 27 deletions docs/3.12/scalar-kubernetes/AccessScalarProducts.md
@@ -42,44 +42,34 @@ If you deploy your application (client) in the same Kubernetes cluster as Scalar
The following are examples of ScalarDB and ScalarDL deployments in the `ns-scalar` namespace:

* **ScalarDB Server**

```console
scalardb-envoy.ns-scalar.svc.cluster.local
```

* **ScalarDL Ledger**

```console
scalardl-ledger-envoy.ns-scalar.svc.cluster.local
```
* **ScalarDL Auditor**

```console
scalardl-auditor-envoy.ns-scalar.svc.cluster.local
```
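
To check that a client pod can actually resolve one of these FQDNs, you can run a one-off lookup. The following is a minimal sketch, assuming the `ns-scalar` example above and that a `busybox` image can be pulled in your cluster:

```console
kubectl run dns-check --image=busybox --restart=Never -it --rm -- nslookup scalardb-envoy.ns-scalar.svc.cluster.local
```

The `--rm` flag removes the temporary pod once the lookup finishes.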

When using the Kubernetes service resource, you must set the above FQDN in the properties file for the application (client) as follows:

* **Client properties file for ScalarDB Server**

```properties
scalar.db.contact_points=<HELM_RELEASE_NAME>-envoy.<NAMESPACE>.svc.cluster.local
scalar.db.contact_port=60051
scalar.db.storage=grpc
scalar.db.transaction_manager=grpc
```

* **Client properties file for ScalarDL Ledger**

```properties
scalar.dl.client.server.host=<HELM_RELEASE_NAME>-envoy.<NAMESPACE>.svc.cluster.local
scalar.dl.ledger.server.port=50051
scalar.dl.ledger.server.privileged_port=50052
```

* **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**

```properties
# Ledger
scalar.dl.client.server.host=<HELM_RELEASE_NAME>-envoy.<NAMESPACE>.svc.cluster.local
@@ -104,24 +94,19 @@ For more details on how to configure your custom values file, see [Service confi
When using a load balancer, you must set the FQDN or IP address of the load balancer in the properties file for the application (client) as follows.

* **Client properties file for ScalarDB Server**

```properties
scalar.db.contact_points=<LOAD_BALANCER_FQDN_OR_IP_ADDRESS>
scalar.db.contact_port=60051
scalar.db.storage=grpc
scalar.db.transaction_manager=grpc
```

* **Client properties file for ScalarDL Ledger**

```properties
scalar.dl.client.server.host=<LOAD_BALANCER_FQDN_OR_IP_ADDRESS>
scalar.dl.ledger.server.port=50051
scalar.dl.ledger.server.privileged_port=50052
```

* **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**

```properties
# Ledger
scalar.dl.client.server.host=<LOAD_BALANCER_FQDN_OR_IP_ADDRESS>
@@ -147,48 +132,37 @@ The concrete implementation of the load balancer and access method depend on the

You can run client requests to ScalarDB or ScalarDL from a bastion server by running the `kubectl port-forward` command. If you create a ScalarDL Auditor mode environment, however, you must run two `kubectl port-forward` commands with different kubeconfig files from one bastion server to access two Kubernetes clusters. A combined sketch of the Auditor-mode port-forwarding commands appears after the steps below.

- 1. **(ScalarDL Auditor mode only)** In the bastion server for ScalarDL Ledger, configure an existing kubeconfig file or add a new kubeconfig file to access the Kubernetes cluster for ScalarDL Auditor. For details on how to configure the kubeconfig file of each managed Kubernetes cluster, see [Configure kubeconfig](./CreateBastionServer.md#configure-kubeconfig).
+ 1. **(ScalarDL Auditor mode only)** In the bastion server for ScalarDL Ledger, configure an existing kubeconfig file or add a new kubeconfig file to access the Kubernetes cluster for ScalarDL Auditor. For details on how to configure the kubeconfig file of each managed Kubernetes cluster, see [Configure kubeconfig](CreateBastionServer.md#configure-kubeconfig).
2. Configure port forwarding to each service from the bastion server.
* **ScalarDB Server**

```console
kubectl port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 60051:60051
```

* **ScalarDL Ledger**

```console
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_LEDGER> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 50051:50051
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_LEDGER> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 50052:50052
```
* **ScalarDL Auditor**

```console
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_AUDITOR> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 40051:40051
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_AUDITOR> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 40052:40052
```

3. Configure the properties file to access ScalarDB or ScalarDL via `localhost`.
* **Client properties file for ScalarDB Server**

```properties
scalar.db.contact_points=localhost
scalar.db.contact_port=60051
scalar.db.storage=grpc
scalar.db.transaction_manager=grpc
```

* **Client properties file for ScalarDL Ledger**

```properties
scalar.dl.client.server.host=localhost
scalar.dl.ledger.server.port=50051
scalar.dl.ledger.server.privileged_port=50052
```

* **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**

```properties
# Ledger
scalar.dl.client.server.host=localhost
Expand All @@ -201,3 +175,4 @@ You can run client requests to ScalarDB or ScalarDL from a bastion server by run
scalar.dl.auditor.server.port=40051
scalar.dl.auditor.server.privileged_port=40052
```
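
In ScalarDL Auditor mode, the Ledger and Auditor port-forwarding commands from step 2 must all stay running at the same time. The following is a minimal sketch of keeping them alive together from one bastion server, assuming the same contexts, namespaces, and release names as above:

```console
# Run all four port-forwarding commands in the background, then wait on them.
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_LEDGER> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 50051:50051 &
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_LEDGER> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 50052:50052 &
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_AUDITOR> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 40051:40051 &
kubectl --context <CONTEXT_IN_KUBERNETES_FOR_SCALARDL_AUDITOR> port-forward -n <NAMESPACE> svc/<RELEASE_NAME>-envoy 40052:40052 &
wait
```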

22 changes: 3 additions & 19 deletions docs/3.12/scalar-kubernetes/BackupNoSQL.md
@@ -13,10 +13,10 @@ In this guide, we assume that you are using point-in-time recovery (PITR) or its
* **The ScalarDB or ScalarDL pod names in the `NAME` column.** Write down the pod names so that you can compare those names with the pod names after performing the backup.
* **The ScalarDB or ScalarDL pod status is `Running` in the `STATUS` column.** Confirm that the pods are running before proceeding with the backup. You will need to pause the pods in the next step.
* **The restart count of each pod in the `RESTARTS` column.** Write down the restart count of each pod so that you can compare the count with the restart counts after performing the backup.
- 2. Pause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to pause the pods, see the [Details on using `scalar-admin`](./BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
+ 2. Pause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to pause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
3. Write down the `pause completed` time. You will need to refer to that time when restoring the data by using the PITR feature.
4. Back up each database by using the backup feature. If you have enabled the automatic backup and PITR features, the managed databases will perform backups automatically. Please note that you should wait approximately 10 seconds to create a sufficiently long period, which avoids clock-skew issues between the client clock and the database clock. This 10-second period is the exact period in which you can restore data by using the PITR feature. A sketch that combines the pause, wait, and unpause steps appears after this list.
- 5. Unpause ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to unpause the pods, see the [Details on using `scalar-admin`](./BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
+ 5. Unpause ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to unpause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
6. Check the `unpause started` time. You must check the `unpause started` time to confirm the exact period in which you can restore data by using the PITR feature.
7. Check the pod status after performing the backup. You must check the following four points by using the `kubectl get pod` command after the backup operation is completed.
* **The number of ScalarDB or ScalarDL pods.** Confirm this number matches the number of pods that you wrote down before performing the backup.
@@ -25,7 +25,7 @@ In this guide, we assume that you are using point-in-time recovery (PITR) or its
* **The restart count of each pod in the `RESTARTS` column.** Confirm the counts match the restart counts that you wrote down before performing the backup.

**If any of these values differ, you must retry the backup operation from the beginning.** The values may differ because some pods were added or restarted while the backup was being performed. In such a case, those pods will run in the `unpause` state. Pods in the `unpause` state will cause the backup data to be transactionally inconsistent.
- 8. **(Amazon DynamoDB only)** If you use the PITR feature of DynamoDB, you will need to perform additional steps to create a backup because the feature restores data to a table with a different name by using PITR. For details on the additional steps to take after creating the exact period in which you can restore data, please see [Restore databases in a Kubernetes environment](./RestoreDatabase.md#amazon-dynamodb).
+ 8. **(Amazon DynamoDB only)** If you use the PITR feature of DynamoDB, you will need to perform additional steps to create a backup because the feature restores data to a table with a different name by using PITR. For details on the additional steps to take after creating the exact period in which you can restore data, please see [Restore databases in a Kubernetes environment](RestoreDatabase.md#amazon-dynamodb).
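
The following sketch combines steps 2 through 5 for ScalarDB, reusing the `scalar-admin` commands that the [Details on using `scalar-admin`](BackupNoSQL.md#details-on-using-scalar-admin) section describes; for ScalarDL, substitute the corresponding SRV service URL. The backup in step 4 happens inside the paused window when the automatic backup and PITR features are enabled:

```console
# Pause, keep the pods paused for roughly 10 seconds to create the restorable window, then unpause.
kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
sleep 10
kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```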

## Back up multiple databases

@@ -43,19 +43,14 @@ If you use Scalar Helm Charts to deploy ScalarDB or ScalarDL, the `my-svc` and `

* Example
* ScalarDB Server

```console
_scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Ledger

```console
_scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Auditor

```console
_scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```
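
To confirm that such an SRV record resolves inside the cluster, you can run a one-off lookup. This is a sketch that assumes a temporary pod whose `nslookup` supports the `-type=srv` option (recent `busybox` images do):

```console
kubectl run srv-check --image=busybox --restart=Never -it --rm -- nslookup -type=srv _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```
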
@@ -95,19 +90,14 @@ You can send a pause request to ScalarDB or ScalarDL pods in a Kubernetes enviro

* Example
* ScalarDB Server

```console
kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Ledger

```console
kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Auditor

```console
kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```
@@ -118,19 +108,14 @@ You can send an unpause request to ScalarDB or ScalarDL pods in a Kubernetes env

* Example
* ScalarDB Server

```console
kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Ledger

```console
kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```

* ScalarDL Auditor

```console
kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```
@@ -142,7 +127,6 @@ The `scalar-admin` pods output the `pause completed` time and `unpause started`
```console
kubectl logs scalar-admin-pause
```

```console
kubectl logs scalar-admin-unpause
```
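
To pull just those times out of the logs, you can filter with `grep`. This sketch assumes the log lines contain the literal phrases `pause completed` and `unpause started`:

```console
kubectl logs scalar-admin-pause | grep "pause completed"
kubectl logs scalar-admin-unpause | grep "unpause started"
```
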
6 changes: 3 additions & 3 deletions docs/3.12/scalar-kubernetes/BackupRDB.md
@@ -2,13 +2,13 @@

This guide explains how to create a backup of a single relational database (RDB) that ScalarDB or ScalarDL uses in a Kubernetes environment. Please note that this guide assumes that you are using a managed database from a cloud services provider.

- If you have two or more RDBs that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](./BackupNoSQL.md) instead.
+ If you have two or more RDBs that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.md) instead.

## Perform a backup

To perform backups, you should enable the automated backup feature available in the managed databases. By enabling this feature, you do not need to perform any additional backup operations. For details on the backup configurations in each managed database, see the following guides:

- * [Set up a database for ScalarDB/ScalarDL deployment on AWS](./SetupDatabaseForAWS.md)
- * [Set up a database for ScalarDB/ScalarDL deployment on Azure](./SetupDatabaseForAzure.md)
+ * [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.md)
+ * [Set up a database for ScalarDB/ScalarDL deployment on Azure](SetupDatabaseForAzure.md)

Because the managed RDB keeps backup data consistent from a transactions perspective, you can restore backup data to any point in time by using the point-in-time recovery (PITR) feature in the managed RDB.
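
As an illustration only (the guides above are authoritative), enabling automated backups on an Amazon RDS instance with the AWS CLI could look like the following sketch; the instance identifier and retention period are hypothetical, and a nonzero retention period is what turns on automated backups and PITR in RDS:

```console
aws rds modify-db-instance --db-instance-identifier scalardb-rdb --backup-retention-period 7 --apply-immediately
```
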
8 changes: 4 additions & 4 deletions docs/3.12/scalar-kubernetes/BackupRestoreGuide.md
@@ -28,14 +28,14 @@ How you perform backup and restore depends on the type of database (NoSQL or RDB

#### NoSQL or multiple databases

- If you are using a NoSQL database, or if you have two or more databases that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, please see [Back up a NoSQL database in a Kubernetes environment](./BackupNoSQL.md) for details on how to perform a backup.
+ If you are using a NoSQL database, or if you have two or more databases that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, please see [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.md) for details on how to perform a backup.

#### Single RDB

- If you are using a single RDB, please see [Back up an RDB in a Kubernetes environment](./BackupRDB.md) for details on how to perform a backup.
+ If you are using a single RDB, please see [Back up an RDB in a Kubernetes environment](BackupRDB.md) for details on how to perform a backup.

- If you have two or more RDBs that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](./BackupNoSQL.md) instead.
+ If you have two or more RDBs that the [Multi-storage Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md) or [Two-phase Commit Transactions](https://github.com/scalar-labs/scalardb/blob/master/docs/two-phase-commit-transactions.md) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.md) instead.

## Restore a database

- For details on how to restore data from a managed database, please see [Restore databases in a Kubernetes environment](./RestoreDatabase.md).
+ For details on how to restore data from a managed database, please see [Restore databases in a Kubernetes environment](RestoreDatabase.md).
2 changes: 1 addition & 1 deletion docs/3.12/scalar-kubernetes/CreateAKSClusterForScalarDB.md
@@ -1,6 +1,6 @@
# Guidelines for creating an AKS cluster for ScalarDB Server

- This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDB Server deployment. For details on how to deploy ScalarDB Server on an AKS cluster, see [Deploy ScalarDB Server on AKS](./ManualDeploymentGuideScalarDBServerOnAKS.md).
+ This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDB Server deployment. For details on how to deploy ScalarDB Server on an AKS cluster, see [Deploy ScalarDB Server on AKS](ManualDeploymentGuideScalarDBServerOnAKS.md).

## Before you begin

2 changes: 1 addition & 1 deletion docs/3.12/scalar-kubernetes/CreateAKSClusterForScalarDL.md
@@ -1,6 +1,6 @@
# Guidelines for creating an AKS cluster for ScalarDL Ledger

- This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger deployment. For details on how to deploy ScalarDL Ledger on an AKS cluster, see [Deploy ScalarDL Ledger on AKS](./ManualDeploymentGuideScalarDLOnAKS.md).
+ This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger deployment. For details on how to deploy ScalarDL Ledger on an AKS cluster, see [Deploy ScalarDL Ledger on AKS](ManualDeploymentGuideScalarDLOnAKS.md).

## Before you begin

@@ -1,6 +1,6 @@
# Guidelines for creating an AKS cluster for ScalarDL Ledger and ScalarDL Auditor

- This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger and ScalarDL Auditor deployment. For details on how to deploy ScalarDL Ledger and ScalarDL Auditor on an AKS cluster, see [Deploy ScalarDL Ledger and ScalarDL Auditor on AKS](./ManualDeploymentGuideScalarDLAuditorOnAKS.md).
+ This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger and ScalarDL Auditor deployment. For details on how to deploy ScalarDL Ledger and ScalarDL Auditor on an AKS cluster, see [Deploy ScalarDL Ledger and ScalarDL Auditor on AKS](ManualDeploymentGuideScalarDLAuditorOnAKS.md).

## Before you begin

@@ -21,7 +21,7 @@ When deploying ScalarDL Ledger and ScalarDL Auditor, you must:
* Configure a virtual network (VNet) as follows.
* Connect the **VNet of AKS (for Ledger)** and the **VNet of AKS (for Auditor)** by using [virtual network peering](https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-manage-peering). To do so, you must specify different IP ranges for the **VNet of AKS (for Ledger)** and the **VNet of AKS (for Auditor)** when you create those VNets (see the sketch after this list).
* Allow **connections between Ledger and Auditor** to make ScalarDL (Auditor mode) work properly.
- * For more details about these network requirements, refer to [Configure Network Peering for ScalarDL Auditor Mode](./NetworkPeeringForScalarDLAuditor.md).
+ * For more details about these network requirements, refer to [Configure Network Peering for ScalarDL Auditor Mode](NetworkPeeringForScalarDLAuditor.md).
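
For illustration, creating one direction of that peering with the Azure CLI could look like the following sketch; every resource name below is hypothetical, and if the remote VNet lives in a different resource group, you would pass its full resource ID instead of its name:

```console
az network vnet peering create --name ledger-to-auditor --resource-group rg-scalardl-ledger --vnet-name vnet-aks-ledger --remote-vnet vnet-aks-auditor --allow-vnet-access
```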

{% capture notice--warning %}
**Attention**