*: update URLs for deploy #1048

Merged 21 commits on Apr 24, 2019
24 changes: 12 additions & 12 deletions TOC.md
@@ -35,21 +35,21 @@
- [Import Example Database](dev/how-to/get-started/import-example-database.md)
- [Read Historical Data](dev/how-to/get-started/read-historical-data.md)
- Deploy
- [Hardware Recommendations](op-guide/recommendation.md)
- [Hardware Recommendations](dev/how-to/deploy/hardware-recommendations.md)
+ From Binary Tarball
- [For testing environments](op-guide/binary-testing-deployment.md)
- [For production environments](op-guide/binary-deployment.md)
- [For testing environments](dev/how-to/deploy/from-tarball/testing-environment.md)
- [For production environments](dev/how-to/deploy/from-tarball/production-environment.md)
+ Orchestrated Deployment
- [Ansible Deployment (Recommended)](op-guide/ansible-deployment.md)
- [Ansible Offline Deployment](op-guide/offline-ansible-deployment.md)
- [Docker Deployment](op-guide/docker-deployment.md)
- [Kubernetes Deployment](op-guide/kubernetes.md)
- [Overview of Ansible Operations](op-guide/ansible-operation.md)
- [Ansible Deployment (Recommended)](dev/how-to/deploy/orchestrated/ansible.md)
- [Ansible Offline Deployment](dev/how-to/deploy/orchestrated/offline-ansible.md)
- [Docker Deployment](dev/how-to/deploy/orchestrated/docker.md)
- [Kubernetes Deployment](dev/how-to/deploy/orchestrated/kubernetes.md)
- [Overview of Ansible Operations](dev/how-to/deploy/orchestrated/ansible-operations.md)
+ Geographic Redundancy
- [Overview](op-guide/cross-dc-deployment.md)
- [Configure Location Awareness](op-guide/location-awareness.md)
- [TiSpark](tispark/tispark-quick-start-guide.md)
- [Data Migration with Ansible](tools/dm/deployment.md)
- [Overview](dev/how-to/deploy/geographic-redundancy/overview.md)
- [Configure Location Awareness](dev/how-to/deploy/geographic-redundancy/location-awareness.md)
- [TiSpark](dev/how-to/deploy/tispark.md)
- [Data Migration with Ansible](dev/how-to/deploy/data-migration-with-ansible.md)
+ Secure
- [Security Compatibility with MySQL](sql/security-compatibility.md)
- [The TiDB Access Privilege System](sql/privilege.md)
11 changes: 6 additions & 5 deletions op-guide/binary-deployment.md → ...oy/from-tarball/production-environment.md
100755 → 100644
@@ -1,26 +1,27 @@
---
title: Production Deployment from Binary Tarball
summary: Use the binary to deploy a TiDB cluster.
category: operations
category: how-to
aliases: ['/docs/op-guide/binary-deployment/']
---

# Production Deployment from Binary Tarball

This guide provides instructions for installing TiDB from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, start the components in the order PD -> TiKV -> TiDB. To stop it, stop the components in the reverse order: TiDB -> TiKV -> PD.

See also [local deployment](../op-guide/binary-local-deployment.md) and [testing enviroment](../op-guide/binary-testing-deployment.md) deployment.
See also [local deployment](/dev/how-to/get-started/local-cluster/install-from-binary.md) and [testing environment](/dev/how-to/deploy/from-tarball/testing-environment.md) deployment.
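
As a quick illustration of the start and stop order described above, here is a minimal single-machine sketch. The flags, ports, and directory names are assumptions based on common defaults; adapt them to the multi-node layout described later in this guide.

```bash
# Minimal sketch of the PD -> TiKV -> TiDB start order (illustrative values).

# 1. Start PD first.
./bin/pd-server --name=pd1 \
    --data-dir=pd1 \
    --client-urls="http://127.0.0.1:2379" \
    --log-file=pd.log &

# 2. Start TiKV after PD is up.
./bin/tikv-server --pd="127.0.0.1:2379" \
    --addr="127.0.0.1:20160" \
    --data-dir=tikv1 \
    --log-file=tikv.log &

# 3. Start TiDB last.
./bin/tidb-server --store=tikv \
    --path="127.0.0.1:2379" \
    --log-file=tidb.log &

# To stop the service, reverse the order: stop tidb-server, then tikv-server, then pd-server.
```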

## Prepare

Before you start, see [TiDB architecture](/overview.md#tidb-architecture) and [Software and Hardware Requirements](/op-guide/recommendation.md). Make sure the following requirements are satisfied:
Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied:

### Operating system

For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. The following additional configuration is recommended:

| Configuration | Description |
| :-- | :-------------------- |
| Supported Platform | RHEL/CentOS 7.3+ ([more details](/op-guide/recommendation.md)) |
| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) |
| File System | ext4 is recommended |
| Swap Space | Should be disabled |
| Disk Block Size | Set the system disk `Block` size to `4096` |
@@ -119,7 +120,7 @@ $ cd tidb-latest-linux-amd64

## Multiple nodes cluster deployment

For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Requirements](/op-guide/recommendation.md).
For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md).

Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details:

op-guide/binary-testing-deployment.md → dev/how-to/deploy/from-tarball/testing-environment.md
@@ -1,26 +1,27 @@
---
title: Testing Deployment from Binary Tarball
summary: Use the binary to deploy a TiDB cluster.
category: operations
category: how-to
aliases: ['/docs/op-guide/binary-testing-deployment/']
---

# Testing Deployment from Binary Tarball

This guide provides installation instructions for all TiDB components across multiple nodes for testing purposes. It does not match the recommended usage for production systems.

See also [local deployment](../op-guide/binary-local-deployment.md) and [production enviroment](../op-guide/binary-deployment.md) deployment.
See also [local deployment](/dev/how-to/get-started/local-cluster/install-from-binary.md) and [production environment](/dev/how-to/deploy/from-tarball/production-environment.md) deployment.

## Prepare

Before you start, see [TiDB architecture](/overview.md#tidb-architecture) and [Software and Hardware Requirements](/op-guide/recommendation.md). Make sure the following requirements are satisfied:
Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied:

### Operating system

For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. The following additional configuration is recommended:

| Configuration | Description |
| :-- | :-------------------- |
| Supported Platform | RHEL/CentOS 7.3+ ([more details](/op-guide/recommendation.md)) |
| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) |
| File System | ext4 is recommended |
| Swap Space | Should be disabled |
| Disk Block Size | Set the system disk `Block` size to `4096` |
op-guide/location-awareness.md → dev/how-to/deploy/geographic-redundancy/location-awareness.md
@@ -1,7 +1,8 @@
---
title: Cluster Topology Configuration
summary: Learn to configure cluster topology to maximize the capacity for disaster recovery.
category: operations
category: how-to
aliases: ['/docs/op-guide/location-awareness/']
---

# Cluster Topology Configuration
@@ -10,7 +11,7 @@ category: operations

PD schedules according to the topology of the TiKV cluster to maximize TiKV's capability for disaster recovery.

Before you begin, see [Deploy TiDB Using Ansible (Recommended)](../op-guide/ansible-deployment.md) and [Deploy TiDB Using Docker](../op-guide/docker-deployment.md).
Before you begin, see [Deploy TiDB Using Ansible (Recommended)](/dev/how-to/deploy/orchestrated/ansible.md) and [Deploy TiDB Using Docker](/dev/how-to/deploy/orchestrated/docker.md).

## TiKV reports the topological information
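
A minimal sketch of how this typically looks is given below. The label keys and the exact flag and configuration names are assumptions for illustration; verify them against the PD and TiKV configuration references for your version.

```bash
# Hypothetical example: label each TiKV instance with its physical location
# so that PD can spread replicas across failure domains.
./bin/tikv-server --labels zone=z1,rack=r1,host=h1 \
    --pd="pd1:2379,pd2:2379,pd3:2379" \
    --addr="tikv1:20160" \
    --data-dir=/data/tikv &

# PD needs to know which label keys describe the topology, for example in pd.toml
# (assumed configuration section):
#   [replication]
#   location-labels = ["zone", "rack", "host"]
```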

op-guide/cross-dc-deployment.md → dev/how-to/deploy/geographic-redundancy/overview.md
@@ -1,6 +1,7 @@
---
title: Cross-DC Deployment Solutions
category: deployment
category: how-to
aliases: ['/docs/op-guide/cross-dc-deployment/']
---

# Cross-DC Deployment Solutions
@@ -11,13 +12,13 @@ As a NewSQL database, TiDB excels in the best features of the traditional relati

TiDB, TiKV, and PD are distributed among 3 DCs. This is the most common deployment solution and provides the highest availability.

![3-DC Deployment Architecture](../media/deploy-3dc.png)
![3-DC Deployment Architecture](/media/deploy-3dc.png)

### Advantages

All the replicas are distributed among 3 DCs. Even if one DC goes down, the other 2 DCs initiate leader election and resume service within a reasonable amount of time (usually within 20 seconds), and no data is lost. See the following diagram for more information:

![Disaster Recovery for 3-DC Deployment](../media/deploy-3dc-dr.png)
![Disaster Recovery for 3-DC Deployment](/media/deploy-3dc-dr.png)

### Disadvantages

@@ -31,13 +32,13 @@ The performance is greatly limited by the network latency.

If not all three DCs need to provide service to the applications, you can dispatch all the requests to one DC and configure the scheduling policy so that all the TiKV Region leaders and the PD leader are migrated to that DC, as we have done in the following test. In this way, neither obtaining TSO nor reading TiKV Regions is affected by the network latency between DCs. If this DC goes down, the PD leader and the Region leaders are automatically elected in the surviving DCs, and you only need to switch the requests to a DC that is still online.

![Read Performance Optimized 3-DC Deployment](../media/deploy-3dc-optimize.png)
![Read Performance Optimized 3-DC Deployment](/media/deploy-3dc-optimize.png)
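
The sketch below shows how such a policy might be applied with pd-ctl. The subcommands, member names, and label values are assumptions for illustration only; verify the exact syntax against the pd-ctl reference for your version.

```bash
# Assumed pd-ctl invocations for keeping leaders in one DC (illustrative names).

# Give the PD member in the preferred DC a higher leader priority.
./pd-ctl -u http://127.0.0.1:2379 member leader_priority pd-in-dc1 5

# Keep TiKV Region leaders out of the other DCs via a label property.
./pd-ctl -u http://127.0.0.1:2379 config set label-property reject-leader dc dc2
./pd-ctl -u http://127.0.0.1:2379 config set label-property reject-leader dc dc3
```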

## 3-DC in 2 cities Deployment Solution

This solution is similar to the previous 3-DC deployment solution and can be considered an optimization based on the business scenario. The difference is that the distance between the 2 DCs within the same city is short, so the latency between them is very low. In this case, you can dispatch the requests to the two DCs within the same city and configure the TiKV leaders and the PD leader to be located in these 2 DCs.

![2-DC in 2 Cities Deployment Architecture](../media/deploy-2city3dc.png)
![2-DC in 2 Cities Deployment Architecture](/media/deploy-2city3dc.png)

Compared with the 3-DC deployment, the 3-DC in 2 cities deployment has the following advantages:

@@ -51,11 +52,11 @@ However, the disadvantage is that if the 2 DCs within the same city goes down, w

The 2-DC + Binlog synchronization solution is similar to the MySQL Master-Slave solution. 2 complete sets of TiDB clusters (each including TiDB, PD, and TiKV) are deployed in 2 DCs: one acts as the Master and the other as the Slave. Under normal circumstances, the Master DC handles all the requests, and the data written to the Master DC is asynchronously replicated to the Slave DC via Binlog.

![Data Synchronization in 2-DC in 2 Cities Deployment](../media/deploy-binlog.png)
![Data Synchronization in 2-DC in 2 Cities Deployment](/media/deploy-binlog.png)

If the Master DC goes down, the requests can be switched to the Slave cluster. Similar to MySQL, some data might be lost. But unlike MySQL, this solution ensures high availability within the same DC: if some nodes within the DC go down, the online business is not affected and no manual intervention is needed, because the cluster automatically re-elects leaders to continue providing services.

![2-DC as a Mutual Backup Deployment](../media/deploy-backup.png)
![2-DC as a Mutual Backup Deployment](/media/deploy-backup.png)

Some of our production users also adopt the 2-DC multi-active solution, which means:

op-guide/recommendation.md → dev/how-to/deploy/hardware-recommendations.md
@@ -1,7 +1,8 @@
---
title: Software and Hardware Recommendations
summary: Learn the software and hardware recommendations for deploying and running TiDB.
category: operations
category: how-to
aliases: ['/docs/op-guide/recommendation/']
---

# Software and Hardware Recommendations
op-guide/ansible-operation.md → dev/how-to/deploy/orchestrated/ansible-operations.md
@@ -1,7 +1,8 @@
---
title: TiDB-Ansible Common Operations
summary: Learn some common operations when using TiDB-Ansible to administer a TiDB cluster.
category: operations
category: how-to
aliases: ['/docs/op-guide/ansible-operation/']
---

# TiDB-Ansible Common Operations
@@ -42,4 +43,5 @@ This operation stops the cluster and cleans up the data directory.

> **Note:**
>
> If the deployment directory is a mount point, an error is reported, but the result of the operation is not affected, so you can ignore it.

op-guide/ansible-deployment.md → dev/how-to/deploy/orchestrated/ansible.md
@@ -1,7 +1,8 @@
---
title: Deploy TiDB Using Ansible
summary: Use Ansible to deploy a TiDB cluster.
category: operations
category: how-to
aliases: ['/docs/op-guide/ansible-deployment/']
---

# Deploy TiDB Using Ansible
@@ -18,14 +19,14 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a

- Initialize operating system parameters
- Deploy the whole TiDB cluster
- [Start the TiDB cluster](../op-guide/ansible-operation.md#start-a-cluster)
- [Stop the TiDB cluster](../op-guide/ansible-operation.md#stop-a-cluster)
- [Modify component configuration](../op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
- [Scale the TiDB cluster](../op-guide/ansible-deployment-scale.md)
- [Upgrade the component version](../op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
- [Enable the cluster binlog](../tools/tidb-binlog-cluster.md)
- [Clean up data of the TiDB cluster](../op-guide/ansible-operation.md#clean-up-cluster-data)
- [Destroy the TiDB cluster](../op-guide/ansible-operation.md#destroy-a-cluster)
- [Start the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#start-a-cluster)
- [Stop the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#stop-a-cluster)
- [Modify component configuration](/op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
- [Scale the TiDB cluster](/op-guide/ansible-deployment-scale.md)
- [Upgrade the component version](/op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
- [Enable the cluster binlog](/tools/tidb-binlog-cluster.md)
- [Clean up data of the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#clean-up-cluster-data)
- [Destroy the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#destroy-a-cluster)

## Prepare

@@ -35,14 +36,14 @@ Before you start, make sure you have:

- 4 or more machines

A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](../op-guide/recommendation.md).
A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md).

- CentOS 7.3 (64 bit) or later, x86_64 architecture (AMD64)
- Network between machines

> **Note:**
>
> When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](../op-guide/docker-compose.md) on a single machine.
> When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](/dev/how-to/get-started/local-cluster/install-from-docker-compose.md) on a single machine.

2. A Control Machine that meets the following requirements:

@@ -374,7 +375,7 @@ You can choose one of the following two types of cluster topology according to y

- [The cluster topology of a single TiKV instance on each TiKV node](#option-1-use-the-cluster-topology-of-a-single-tikv-instance-on-each-tikv-node)

In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than the required in [Hardware and Software Requirements](../op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.
In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than required in [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.

- [The cluster topology of multiple TiKV instances on each TiKV node](#option-2-use-the-cluster-topology-of-multiple-tikv-instances-on-each-tikv-node)

@@ -538,8 +539,8 @@ To enable the following control variables, use the capitalized `True`. To disabl
| cluster_name | the name of a cluster, adjustable |
| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
| process_supervision | how processes are supervised, systemd by default, supervise optional |
| timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](../sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values |
| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](../op-guide/recommendation.md#network-requirements) to the white list |
| timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](/sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values |
| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](/dev/how-to/deploy/hardware-recommendations.md#network-requirements) to the white list |
| enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it |
| set_hostname | to edit the hostname of the managed node based on the IP, False by default |
| enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable |
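
These control variables typically live in the `inventory.ini` file shipped with TiDB-Ansible. The snippet below is a hedged example of what such a section might look like; the group name and values are assumptions, so adjust them to your environment.

```ini
# Hypothetical excerpt from inventory.ini (illustrative values).
[all:vars]
cluster_name = test-cluster
process_supervision = systemd
timezone = Asia/Shanghai
enable_firewalld = False
enable_ntpd = True
set_hostname = False
enable_binlog = False
```
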
op-guide/docker-deployment.md → dev/how-to/deploy/orchestrated/docker.md
@@ -1,14 +1,15 @@
---
title: Deploy TiDB Using Docker
summary: Use Docker to manually deploy a multi-node TiDB cluster on multiple machines.
category: operations
category: how-to
aliases: ['/docs/op-guide/docker-deployment/']
---

# Deploy TiDB Using Docker

This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker.

To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](../op-guide/recommendation.md).
To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md).
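
As an illustration of the manual, per-container approach, here is a hedged sketch of starting a single PD instance with Docker. The image name, host IPs, ports, and flags are assumptions drawn from typical PD defaults; the full multi-node procedure follows below.

```bash
# Hypothetical example of running one PD instance in Docker; adapt host IPs,
# volumes, and the --initial-cluster list to your own machines.
docker run -d --name pd1 \
    -p 2379:2379 -p 2380:2380 \
    -v /data/pd1:/data/pd1 \
    pingcap/pd:latest \
    --name="pd1" \
    --data-dir="/data/pd1" \
    --client-urls="http://0.0.0.0:2379" \
    --advertise-client-urls="http://192.168.0.101:2379" \
    --peer-urls="http://0.0.0.0:2380" \
    --advertise-peer-urls="http://192.168.0.101:2380" \
    --initial-cluster="pd1=http://192.168.0.101:2380,pd2=http://192.168.0.102:2380,pd3=http://192.168.0.103:2380"
```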

## Preparation

op-guide/kubernetes.md → dev/how-to/deploy/orchestrated/kubernetes.md
@@ -1,7 +1,8 @@
---
title: TiDB Deployment on Kubernetes
summary: Use TiDB Operator to quickly deploy a TiDB cluster on Kubernetes
category: operations
category: how-to
aliases: ['/docs/op-guide/kubernetes/']
---

# TiDB Deployment on Kubernetes