From d12f00b6238db91d84e8a8eb1476c6601736bddf Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:12:24 -0600
Subject: [PATCH 01/17] Moved how-to/deply files to new URLs

---
 TOC.md                                        | 24 +++++++++----------
 .../from-tarball/production-environment.md    |  3 ++-
 .../from-tarball/testing-environment.md       |  3 ++-
 .../location-awareness.md                     |  3 ++-
 .../deploy/geographic-redundancy/overview.md  |  3 ++-
 .../how-to/deploy/hardware-recommendations.md |  3 ++-
 .../deploy/orchestrated/ansible-operations.md |  5 ++--
 .../how-to/deploy/orchestrated/ansible.md     |  3 ++-
 .../how-to/deploy/orchestrated/docker.md      |  3 ++-
 .../how-to/deploy/orchestrated}/kubernetes.md |  3 ++-
 .../deploy/orchestrated/offline-ansible.md    |  5 ++--
 .../how-to/deploy/tispark.md                  |  3 ++-
 12 files changed, 36 insertions(+), 25 deletions(-)
 rename op-guide/binary-deployment.md => dev/how-to/deploy/from-tarball/production-environment.md (99%)
 rename op-guide/binary-testing-deployment.md => dev/how-to/deploy/from-tarball/testing-environment.md (99%)
 rename {op-guide => dev/how-to/deploy/geographic-redundancy}/location-awareness.md (98%)
 rename op-guide/cross-dc-deployment.md => dev/how-to/deploy/geographic-redundancy/overview.md (98%)
 rename op-guide/recommendation.md => dev/how-to/deploy/hardware-recommendations.md (98%)
 rename op-guide/ansible-operation.md => dev/how-to/deploy/orchestrated/ansible-operations.md (92%)
 rename op-guide/ansible-deployment.md => dev/how-to/deploy/orchestrated/ansible.md (99%)
 rename op-guide/docker-deployment.md => dev/how-to/deploy/orchestrated/docker.md (98%)
 rename {op-guide => dev/how-to/deploy/orchestrated}/kubernetes.md (97%)
 rename op-guide/offline-ansible-deployment.md => dev/how-to/deploy/orchestrated/offline-ansible.md (98%)
 rename tispark/tispark-quick-start-guide.md => dev/how-to/deploy/tispark.md (98%)

diff --git a/TOC.md b/TOC.md
index 453e827d607f3..d4a2d2a30b59b 100644
--- a/TOC.md
+++ b/TOC.md
@@ -33,21 +33,21 @@
   - [Import Sample Database](bikeshare-example-database.md)
   - [Read Historical Data](op-guide/history-read.md)
 - Deploy
-  - [Hardware Recommendations](op-guide/recommendation.md)
+  - [Hardware Recommendations](dev/how-to/deploy/hardware-recommendations.md)
   + From Binary Tarball
-    - [For testing environments](op-guide/binary-testing-deployment.md)
-    - [For production environments](op-guide/binary-deployment.md)
+    - [For testing environments](dev/how-to/deploy/from-tarball/testing-environment.md)
+    - [For production environments](dev/how-to/deploy/from-tarball/production-environment.md)
   + Orchestrated Deployment
-    - [Ansible Deployment (Recommended)](op-guide/ansible-deployment.md)
-    - [Ansible Offline Deployment](op-guide/offline-ansible-deployment.md)
-    - [Docker Deployment](op-guide/docker-deployment.md)
-    - [Kubernetes Deployment](op-guide/kubernetes.md)
-    - [Overview of Ansible Operations](op-guide/ansible-operation.md)
+    - [Ansible Deployment (Recommended)](dev/how-to/deploy/orchestrated/ansible.md)
+    - [Ansible Offline Deployment](dev/how-to/deploy/orchestrated/offline-ansible.md)
+    - [Docker Deployment](dev/how-to/deploy/orchestrated/docker.md)
+    - [Kubernetes Deployment](dev/how-to/deploy/orchestrated/kubernetes.md)
+    - [Overview of Ansible Operations](dev/how-to/deploy/orchestrated/ansible-operations.md)
   + Geographic Redundancy
-    - [Overview](op-guide/cross-dc-deployment.md)
-    - [Configure Location Awareness](op-guide/location-awareness.md)
-  - [TiSpark](tispark/tispark-quick-start-guide.md)
-  - [Data Migration with Ansible](tools/dm/deployment.md)
+    - [Overview](dev/how-to/deploy/geographic-redundancy/overview.md)
+    - [Configure Location Awareness](dev/how-to/deploy/geographic-redundancy/location-awareness.md)
+  - [TiSpark](dev/how-to/deploy/tispark.md)
+  - [Data Migration with Ansible](dev/how-to/deploy/data-migration-with-ansible.md)
 + Secure
   - [Security Compatibility with MySQL](sql/security-compatibility.md)
   - [The TiDB Access Privilege System](sql/privilege.md)
diff --git a/op-guide/binary-deployment.md b/dev/how-to/deploy/from-tarball/production-environment.md
similarity index 99%
rename from op-guide/binary-deployment.md
rename to dev/how-to/deploy/from-tarball/production-environment.md
index 3de14a5b2bd8e..abc36664cf19b 100755
--- a/op-guide/binary-deployment.md
+++ b/dev/how-to/deploy/from-tarball/production-environment.md
@@ -1,7 +1,8 @@
 ---
 title: Production Deployment from Binary Tarball
 summary: Use the binary to deploy a TiDB cluster.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/binary-deployment/']
 ---
 
 # Production Deployment from Binary Tarball
diff --git a/op-guide/binary-testing-deployment.md b/dev/how-to/deploy/from-tarball/testing-environment.md
similarity index 99%
rename from op-guide/binary-testing-deployment.md
rename to dev/how-to/deploy/from-tarball/testing-environment.md
index 4d943b51d58ff..bfdd2fc399bb3 100644
--- a/op-guide/binary-testing-deployment.md
+++ b/dev/how-to/deploy/from-tarball/testing-environment.md
@@ -1,7 +1,8 @@
 ---
 title: Testing Deployment from Binary Tarball
 summary: Use the binary to deploy a TiDB cluster.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/binary-testing-deployment/']
 ---
 
 # Testing Deployment from Binary Tarball
diff --git a/op-guide/location-awareness.md b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
similarity index 98%
rename from op-guide/location-awareness.md
rename to dev/how-to/deploy/geographic-redundancy/location-awareness.md
index 7f470064e138f..4508162ccf5e4 100644
--- a/op-guide/location-awareness.md
+++ b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
@@ -1,7 +1,8 @@
 ---
 title: Cluster Topology Configuration
 summary: Learn to configure cluster topology to maximize the capacity for disaster recovery.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/location-awareness/']
 ---
 
 # Cluster Topology Configuration
diff --git a/op-guide/cross-dc-deployment.md b/dev/how-to/deploy/geographic-redundancy/overview.md
similarity index 98%
rename from op-guide/cross-dc-deployment.md
rename to dev/how-to/deploy/geographic-redundancy/overview.md
index e6cb2d52823b0..e4050e7f08410 100644
--- a/op-guide/cross-dc-deployment.md
+++ b/dev/how-to/deploy/geographic-redundancy/overview.md
@@ -1,6 +1,7 @@
 ---
 title: Cross-DC Deployment Solutions
-category: deployment
+category: how-to
+aliases: ['/docs/op-guide/cross-dc-deployment/']
 ---
 
 # Cross-DC Deployment Solutions
diff --git a/op-guide/recommendation.md b/dev/how-to/deploy/hardware-recommendations.md
similarity index 98%
rename from op-guide/recommendation.md
rename to dev/how-to/deploy/hardware-recommendations.md
index 3d1350e9e4bec..8e9212520968d 100644
--- a/op-guide/recommendation.md
+++ b/dev/how-to/deploy/hardware-recommendations.md
@@ -1,7 +1,8 @@
 ---
 title: Software and Hardware Recommendations
 summary: Learn the software and hardware recommendations for deploying and running TiDB.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/recommendation/']
 ---
 
 # Software and Hardware Recommendations
diff --git a/op-guide/ansible-operation.md b/dev/how-to/deploy/orchestrated/ansible-operations.md
similarity index 92%
rename from op-guide/ansible-operation.md
rename to dev/how-to/deploy/orchestrated/ansible-operations.md
index 8ecc8af11380f..256ea0e6b422d 100644
--- a/op-guide/ansible-operation.md
+++ b/dev/how-to/deploy/orchestrated/ansible-operations.md
@@ -1,7 +1,8 @@
 ---
 title: TiDB-Ansible Common Operations
 summary: Learn some common operations when using TiDB-Ansible to administer a TiDB cluster.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/ansible-operation/']
 ---
 
 # TiDB-Ansible Common Operations
@@ -40,4 +41,4 @@ $ ansible-playbook unsafe_cleanup.yml
 
 This operation stops the cluster and cleans up the data directory.
 
-> **Note:** If the deployment directory is a mount point, an error will be reported, but implementation results remain unaffected, so you can ignore it.
\ No newline at end of file
+> **Note:** If the deployment directory is a mount point, an error will be reported, but implementation results remain unaffected, so you can ignore it.
diff --git a/op-guide/ansible-deployment.md b/dev/how-to/deploy/orchestrated/ansible.md
similarity index 99%
rename from op-guide/ansible-deployment.md
rename to dev/how-to/deploy/orchestrated/ansible.md
index b90f5718d8eb5..912cab27cef25 100644
--- a/op-guide/ansible-deployment.md
+++ b/dev/how-to/deploy/orchestrated/ansible.md
@@ -1,7 +1,8 @@
 ---
 title: Deploy TiDB Using Ansible
 summary: Use Ansible to deploy a TiDB cluster.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/ansible-deployment/']
 ---
 
 # Deploy TiDB Using Ansible
diff --git a/op-guide/docker-deployment.md b/dev/how-to/deploy/orchestrated/docker.md
similarity index 98%
rename from op-guide/docker-deployment.md
rename to dev/how-to/deploy/orchestrated/docker.md
index 1eec3fc2745d0..6cf168695b307 100644
--- a/op-guide/docker-deployment.md
+++ b/dev/how-to/deploy/orchestrated/docker.md
@@ -1,7 +1,8 @@
 ---
 title: Deploy TiDB Using Docker
 summary: Use Docker to manually deploy a multi-node TiDB cluster on multiple machines.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/docker-deployment/']
 ---
 
 # Deploy TiDB Using Docker
diff --git a/op-guide/kubernetes.md b/dev/how-to/deploy/orchestrated/kubernetes.md
similarity index 97%
rename from op-guide/kubernetes.md
rename to dev/how-to/deploy/orchestrated/kubernetes.md
index 55edd0892f613..24f24632c1515 100644
--- a/op-guide/kubernetes.md
+++ b/dev/how-to/deploy/orchestrated/kubernetes.md
@@ -1,7 +1,8 @@
 ---
 title: TiDB Deployment on Kubernetes
 summary: Use TiDB Operator to quickly deploy a TiDB cluster on Kubernetes
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/kubernetes/']
 ---
 
 # TiDB Deployment on Kubernetes
diff --git a/op-guide/offline-ansible-deployment.md b/dev/how-to/deploy/orchestrated/offline-ansible.md
similarity index 98%
rename from op-guide/offline-ansible-deployment.md
rename to dev/how-to/deploy/orchestrated/offline-ansible.md
index 9a530c0b9f4db..4739a0607acd4 100644
--- a/op-guide/offline-ansible-deployment.md
+++ b/dev/how-to/deploy/orchestrated/offline-ansible.md
@@ -1,7 +1,8 @@
 ---
 title: Deploy TiDB Offline Using Ansible
 summary: Use Ansible to deploy a TiDB cluster offline.
-category: operations
+category: how-to
+aliases: ['/docs/op-guide/offline-ansible-deployment/']
 ---
 
 # Deploy TiDB Offline Using Ansible
@@ -165,4 +166,4 @@ See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](../op-guide/
 
 ## Test the TiDB cluster
 
-See [Test the TiDB cluster](../op-guide/ansible-deployment.md#test-the-tidb-cluster).
\ No newline at end of file
+See [Test the TiDB cluster](../op-guide/ansible-deployment.md#test-the-tidb-cluster).
diff --git a/tispark/tispark-quick-start-guide.md b/dev/how-to/deploy/tispark.md
similarity index 98%
rename from tispark/tispark-quick-start-guide.md
rename to dev/how-to/deploy/tispark.md
index 08fdc32689c72..07e9ae84e9df3 100644
--- a/tispark/tispark-quick-start-guide.md
+++ b/dev/how-to/deploy/tispark.md
@@ -1,7 +1,8 @@
 ---
 title: TiSpark Quick Start Guide
 summary: Learn how to use TiSpark quickly.
-category: User Guide
+category: how-to
+aliases: ['/docs/tispark/tispark-quick-start-guide/']
 ---
 
 # TiSpark Quick Start Guide

From 95df07e7391ec43626b0581304316d3642d563fa Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:16:47 -0600
Subject: [PATCH 02/17] Fix links

---
 dev/how-to/deploy/geographic-redundancy/overview.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/dev/how-to/deploy/geographic-redundancy/overview.md b/dev/how-to/deploy/geographic-redundancy/overview.md
index e4050e7f08410..608aebe330c25 100644
--- a/dev/how-to/deploy/geographic-redundancy/overview.md
+++ b/dev/how-to/deploy/geographic-redundancy/overview.md
@@ -12,13 +12,13 @@ As a NewSQL database, TiDB excels in the best features of the traditional relati
 
 TiDB, TiKV and PD are distributed among 3 DCs, which is the most common deployment solution with the highest availability.
 
-![3-DC Deployment Architecture](../media/deploy-3dc.png)
+![3-DC Deployment Architecture](/media/deploy-3dc.png)
 
 ### Advantages
 
 All the replicas are distributed among 3 DCs. Even if one DC is down, the other 2 DCs will initiate leader election and resume service within a reasonable amount of time (within 20s in most cases) and no data is lost. See the following diagram for more information:
 
-![Disaster Recovery for 3-DC Deployment](../media/deploy-3dc-dr.png)
+![Disaster Recovery for 3-DC Deployment](/media/deploy-3dc-dr.png)
 
 ### Disadvantages
 
 The performance is greatly limited by the network latency.
 
 If not all of the three DCs need to provide service to the applications, you can dispatch all the requests to one DC and configure the scheduling policy to migrate all the TiKV Region leader and PD leader to the same DC, as what we have done in the following test. In this way, neither obtaining TSO or reading TiKV Regions will be impacted by the network latency between DCs. If this DC is down, the PD leader and Region leader will be automatically elected in other surviving DCs, and you just need to switch the requests to the DC that are still online.
 
-![Read Performance Optimized 3-DC Deployment](../media/deploy-3dc-optimize.png)
+![Read Performance Optimized 3-DC Deployment](/media/deploy-3dc-optimize.png)
 
 ## 3-DC in 2 cities Deployment Solution
 
 This solution is similar to the previous 3-DC deployment solution and can be considered as an optimization based on the business scenario. The difference is that the distance between the 2 DCs within the same city is short and thus the latency is very low. In this case, we can dispatch the requests to the two DCs within the same city and configure the TiKV leader and PD leader to be in the 2 DCs in the same city.
 
-![2-DC in 2 Cities Deployment Architecture](../media/deploy-2city3dc.png)
+![2-DC in 2 Cities Deployment Architecture](/media/deploy-2city3dc.png)
 
 Compared with the 3-DC deployment, the 3-DC in 2 cities deployment has the following advantages:
@@ -52,11 +52,11 @@ However, the disadvantage is that if the 2 DCs within the same city goes down, w
 
 The 2-DC + Binlog synchronization is similar to the MySQL Master-Slave solution. 2 complete sets of TiDB clusters (each complete set of the TiDB cluster includes TiDB, PD and TiKV) are deployed in 2 DCs, one acts as the Master and one as the Slave. Under normal circumstances, the Master DC handle all the requests and the data written to the Master DC is asynchronously written to the Slave DC via Binlog.
 
-![Data Synchronization in 2-DC in 2 Cities Deployment](../media/deploy-binlog.png)
+![Data Synchronization in 2-DC in 2 Cities Deployment](/media/deploy-binlog.png)
 
 If the Master DC goes down, the requests can be switched to the slave cluster. Similar to MySQL, some data might be lost. But different from MySQL, this solution can ensure the high availability within the same DC: if some nodes within the DC are down, the online business won’t be impacted and no manual efforts are needed because the cluster will automatically re-elect leaders to provide services.
 
-![2-DC as a Mutual Backup Deployment](../media/deploy-backup.png)
+![2-DC as a Mutual Backup Deployment](/media/deploy-backup.png)
 
 Some of our production users also adopt the 2-DC multi-active solution, which means:

From 7768a7f10ee476c1afe478db9ad5ad42612510ad Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:18:40 -0600
Subject: [PATCH 03/17] fix link

---
 dev/how-to/deploy/tispark.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/tispark.md b/dev/how-to/deploy/tispark.md
index 07e9ae84e9df3..6b1acbc39b181 100644
--- a/dev/how-to/deploy/tispark.md
+++ b/dev/how-to/deploy/tispark.md
@@ -7,7 +7,7 @@ aliases: ['/docs/tispark/tispark-quick-start-guide/']
 
 # TiSpark Quick Start Guide
 
-To make it easy to [try TiSpark](../tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
 
 ## Deployment information
 

From 4829d58454af482ba88ea0f576d46c3fcd31fe9b Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:24:45 -0600
Subject: [PATCH 04/17] Update production-environment.md

---
 dev/how-to/deploy/from-tarball/production-environment.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/from-tarball/production-environment.md b/dev/how-to/deploy/from-tarball/production-environment.md
index abc36664cf19b..b74a3e5cd0a5e 100755
--- a/dev/how-to/deploy/from-tarball/production-environment.md
+++ b/dev/how-to/deploy/from-tarball/production-environment.md
@@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/binary-deployment/']
 
 This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD.
 
-See also [local deployment](../op-guide/binary-local-deployment.md) and [testing enviroment](../op-guide/binary-testing-deployment.md) deployment.
+See also [local deployment](/op-guide/binary-local-deployment.md) and [testing enviroment](/op-guide/binary-testing-deployment.md) deployment.
 
 ## Prepare
 

From 018d6798e2ffe3d7f5f63779e47395863da8e78a Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:25:04 -0600
Subject: [PATCH 05/17] Update testing-environment.md

---
 dev/how-to/deploy/from-tarball/testing-environment.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/from-tarball/testing-environment.md b/dev/how-to/deploy/from-tarball/testing-environment.md
index bfdd2fc399bb3..aa1c1b1215fbd 100644
--- a/dev/how-to/deploy/from-tarball/testing-environment.md
+++ b/dev/how-to/deploy/from-tarball/testing-environment.md
@@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/binary-testing-deployment/']
 
 This guide provides installation instructions for all TiDB components across multiple nodes for testing purposes. It does not match the recommended usage for production systems.
 
-See also [local deployment](../op-guide/binary-local-deployment.md) and [production enviroment](../op-guide/binary-deployment.md) deployment.
+See also [local deployment](/op-guide/binary-local-deployment.md) and [production enviroment](/op-guide/binary-deployment.md) deployment.
 
 ## Prepare
 

From fa228b8af0c3cb1e1776e323f3d18568648058b5 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:25:30 -0600
Subject: [PATCH 06/17] Update location-awareness.md

---
 dev/how-to/deploy/geographic-redundancy/location-awareness.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/geographic-redundancy/location-awareness.md b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
index 4508162ccf5e4..3febe1a691420 100644
--- a/dev/how-to/deploy/geographic-redundancy/location-awareness.md
+++ b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
@@ -11,7 +11,7 @@ aliases: ['/docs/op-guide/location-awareness/']
 
 PD schedules according to the topology of the TiKV cluster to maximize the TiKV's capability for disaster recovery.
 
-Before you begin, see [Deploy TiDB Using Ansible (Recommended)](../op-guide/ansible-deployment.md) and [Deploy TiDB Using Docker](../op-guide/docker-deployment.md).
+Before you begin, see [Deploy TiDB Using Ansible (Recommended)](/op-guide/ansible-deployment.md) and [Deploy TiDB Using Docker](/op-guide/docker-deployment.md).
 
 ## TiKV reports the topological information
 

From 145c3539a118b11a9d05f875220ac4f3e5f9a1a3 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:27:57 -0600
Subject: [PATCH 07/17] Update ansible.md

---
 dev/how-to/deploy/orchestrated/ansible.md | 26 +++++++++++-----------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/dev/how-to/deploy/orchestrated/ansible.md b/dev/how-to/deploy/orchestrated/ansible.md
index 912cab27cef25..ea239648c4323 100644
--- a/dev/how-to/deploy/orchestrated/ansible.md
+++ b/dev/how-to/deploy/orchestrated/ansible.md
@@ -19,14 +19,14 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a
 
 - Initialize operating system parameters
 - Deploy the whole TiDB cluster
-- [Start the TiDB cluster](../op-guide/ansible-operation.md#start-a-cluster)
-- [Stop the TiDB cluster](../op-guide/ansible-operation.md#stop-a-cluster)
-- [Modify component configuration](../op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
-- [Scale the TiDB cluster](../op-guide/ansible-deployment-scale.md)
-- [Upgrade the component version](../op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
-- [Enable the cluster binlog](../tools/tidb-binlog-cluster.md)
-- [Clean up data of the TiDB cluster](../op-guide/ansible-operation.md#clean-up-cluster-data)
-- [Destroy the TiDB cluster](../op-guide/ansible-operation.md#destroy-a-cluster)
+- [Start the TiDB cluster](/op-guide/ansible-operation.md#start-a-cluster)
+- [Stop the TiDB cluster](/op-guide/ansible-operation.md#stop-a-cluster)
+- [Modify component configuration](/op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
+- [Scale the TiDB cluster](/op-guide/ansible-deployment-scale.md)
+- [Upgrade the component version](/op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
+- [Enable the cluster binlog](/tools/tidb-binlog-cluster.md)
+- [Clean up data of the TiDB cluster](/op-guide/ansible-operation.md#clean-up-cluster-data)
+- [Destroy the TiDB cluster](/op-guide/ansible-operation.md#destroy-a-cluster)
 
 ## Prepare
 
@@ -36,12 +36,12 @@ Before you start, make sure you have:
 
 - 4 or more machines
 
-  A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](../op-guide/recommendation.md).
+  A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](/op-guide/recommendation.md).
 
 - CentOS 7.3 (64 bit) or later, x86_64 architecture (AMD64)
 - Network between machines
 
-  > **Note:** When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](../op-guide/docker-compose.md) on a single machine.
+  > **Note:** When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](/op-guide/docker-compose.md) on a single machine.
 
 2. A Control Machine that meets the following requirements:
 
@@ -359,7 +359,7 @@ You can choose one of the following two types of cluster topology according to y
 
 - [The cluster topology of a single TiKV instance on each TiKV node](#option-1-use-the-cluster-topology-of-a-single-tikv-instance-on-each-tikv-node)
 
-    In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than the required in [Hardware and Software Requirements](../op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.
+    In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than the required in [Hardware and Software Requirements](/op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.
 
 - [The cluster topology of multiple TiKV instances on each TiKV node](#option-2-use-the-cluster-topology-of-multiple-tikv-instances-on-each-tikv-node)
 
@@ -521,8 +521,8 @@ To enable the following control variables, use the capitalized `True`. To disabl
 | cluster_name | the name of a cluster, adjustable |
 | tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
 | process_supervision | the supervision way of processes, systemd by default, supervise optional |
-| timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](../sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values |
-| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](../op-guide/recommendation.md#network-requirements) to the white list |
+| timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](/sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values |
+| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](/op-guide/recommendation.md#network-requirements) to the white list |
 | enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it |
 | set_hostname | to edit the hostname of the managed node based on the IP, False by default |
 | enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable |

From 54ac95918fe52f9c8bb187aabec347be004ef686 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:29:28 -0600
Subject: [PATCH 08/17] Update docker.md

---
 dev/how-to/deploy/orchestrated/docker.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/orchestrated/docker.md b/dev/how-to/deploy/orchestrated/docker.md
index 6cf168695b307..e3ae09519b636 100644
--- a/dev/how-to/deploy/orchestrated/docker.md
+++ b/dev/how-to/deploy/orchestrated/docker.md
@@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/docker-deployment/']
 
 This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker.
 
-To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](../op-guide/recommendation.md).
+To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Requirements](/dev/how-to/deploy/hardware-recommendations.md).
 
 ## Preparation
 

From 67e02523450bca16899e7a275cb2c9d14cfe882f Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:56:26 -0600
Subject: [PATCH 09/17] Update production-environment.md

---
 dev/how-to/deploy/from-tarball/production-environment.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/dev/how-to/deploy/from-tarball/production-environment.md b/dev/how-to/deploy/from-tarball/production-environment.md
index b74a3e5cd0a5e..b7b36ad9018af 100755
--- a/dev/how-to/deploy/from-tarball/production-environment.md
+++ b/dev/how-to/deploy/from-tarball/production-environment.md
@@ -9,11 +9,11 @@ aliases: ['/docs/op-guide/binary-deployment/']
 
 This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD.
 
-See also [local deployment](/op-guide/binary-local-deployment.md) and [testing enviroment](/op-guide/binary-testing-deployment.md) deployment.
+See also [local deployment](/op-guide/binary-local-deployment.md) and [testing environment](/dev/how-to/deploy/from-tarball/testing-environment.md) deployment.
 
 ## Prepare
 
-Before you start, see [TiDB architecture](/overview.md#tidb-architecture) and [Software and Hardware Requirements](/op-guide/recommendation.md). Make sure the following requirements are satisfied:
+Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied:
 
 ### Operating system
 
@@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. Th
 
 | Configuration | Description |
 | :-- | :-------------------- |
-| Supported Platform | RHEL/CentOS 7.3+ ([more details](/op-guide/recommendation.md)) |
+| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) |
 | File System | ext4 is recommended |
 | Swap Space | Should be disabled |
 | Disk Block Size | Set the system disk `Block` size to `4096` |
@@ -120,7 +120,7 @@ $ cd tidb-latest-linux-amd64
 
 ## Multiple nodes cluster deployment
 
-For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Requirements](/op-guide/recommendation.md).
+For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md).
 
 Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details:
 

From efddb26fa0370f2565d81fbc3f7da123206f4c2d Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 10:58:29 -0600
Subject: [PATCH 10/17] Update testing-environment.md

---
 dev/how-to/deploy/from-tarball/testing-environment.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/dev/how-to/deploy/from-tarball/testing-environment.md b/dev/how-to/deploy/from-tarball/testing-environment.md
index aa1c1b1215fbd..002bf00443ca4 100644
--- a/dev/how-to/deploy/from-tarball/testing-environment.md
+++ b/dev/how-to/deploy/from-tarball/testing-environment.md
@@ -9,11 +9,11 @@ aliases: ['/docs/op-guide/binary-testing-deployment/']
 
 This guide provides installation instructions for all TiDB components across multiple nodes for testing purposes. It does not match the recommended usage for production systems.
 
-See also [local deployment](/op-guide/binary-local-deployment.md) and [production enviroment](/op-guide/binary-deployment.md) deployment.
+See also [local deployment](/op-guide/binary-local-deployment.md) and [production environment](production-environment.md) deployment.
 
 ## Prepare
 
-Before you start, see [TiDB architecture](/overview.md#tidb-architecture) and [Software and Hardware Requirements](/op-guide/recommendation.md). Make sure the following requirements are satisfied:
+Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied:
 
 ### Operating system
 
@@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. Th
 
 | Configuration | Description |
 | :-- | :-------------------- |
-| Supported Platform | RHEL/CentOS 7.3+ ([more details](/op-guide/recommendation.md)) |
+| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) |
 | File System | ext4 is recommended |
 | Swap Space | Should be disabled |
 | Disk Block Size | Set the system disk `Block` size to `4096` |

From 0486b6f7fae1c292e41236aa7eb5b4d7ef87bd07 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 11:00:38 -0600
Subject: [PATCH 11/17] Update location-awareness.md

---
 dev/how-to/deploy/geographic-redundancy/location-awareness.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/geographic-redundancy/location-awareness.md b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
index 3febe1a691420..203c0867dbe25 100644
--- a/dev/how-to/deploy/geographic-redundancy/location-awareness.md
+++ b/dev/how-to/deploy/geographic-redundancy/location-awareness.md
@@ -11,7 +11,7 @@ aliases: ['/docs/op-guide/location-awareness/']
 
 PD schedules according to the topology of the TiKV cluster to maximize the TiKV's capability for disaster recovery.
 
-Before you begin, see [Deploy TiDB Using Ansible (Recommended)](/op-guide/ansible-deployment.md) and [Deploy TiDB Using Docker](/op-guide/docker-deployment.md).
+Before you begin, see [Deploy TiDB Using Ansible (Recommended)](../orchestrated/ansible.md) and [Deploy TiDB Using Docker](../orchestrated/docker.md).
 
 ## TiKV reports the topological information
 

From d2d25634095badc1961d7a45e98999ae6473e189 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 11:03:58 -0600
Subject: [PATCH 12/17] Update ansible.md

---
 dev/how-to/deploy/orchestrated/ansible.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/dev/how-to/deploy/orchestrated/ansible.md b/dev/how-to/deploy/orchestrated/ansible.md
index ea239648c4323..6d40e1db39c2b 100644
--- a/dev/how-to/deploy/orchestrated/ansible.md
+++ b/dev/how-to/deploy/orchestrated/ansible.md
@@ -19,14 +19,14 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a
 
 - Initialize operating system parameters
 - Deploy the whole TiDB cluster
-- [Start the TiDB cluster](/op-guide/ansible-operation.md#start-a-cluster)
-- [Stop the TiDB cluster](/op-guide/ansible-operation.md#stop-a-cluster)
+- [Start the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#start-a-cluster)
+- [Stop the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#stop-a-cluster)
 - [Modify component configuration](/op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
 - [Scale the TiDB cluster](/op-guide/ansible-deployment-scale.md)
 - [Upgrade the component version](/op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
 - [Enable the cluster binlog](/tools/tidb-binlog-cluster.md)
-- [Clean up data of the TiDB cluster](/op-guide/ansible-operation.md#clean-up-cluster-data)
-- [Destroy the TiDB cluster](/op-guide/ansible-operation.md#destroy-a-cluster)
+- [Clean up data of the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#clean-up-cluster-data)
+- [Destroy the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#destroy-a-cluster)
 
 ## Prepare
 

From d732215fb35a10f0dfbbbe1d56c99af17d3f964c Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 11:06:24 -0600
Subject: [PATCH 13/17] Update ansible.md

---
 dev/how-to/deploy/orchestrated/ansible.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/how-to/deploy/orchestrated/ansible.md b/dev/how-to/deploy/orchestrated/ansible.md
index 6d40e1db39c2b..b39a5a2975fbb 100644
--- a/dev/how-to/deploy/orchestrated/ansible.md
+++ b/dev/how-to/deploy/orchestrated/ansible.md
@@ -36,7 +36,7 @@ Before you start, make sure you have:
 
 - 4 or more machines
 
-  A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](/op-guide/recommendation.md).
+  A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Recommendations](../hardware-recommendations.md).
 
 - CentOS 7.3 (64 bit) or later, x86_64 architecture (AMD64)
 - Network between machines

From ea423542445a60706e9561480df0eca0d7cab4f1 Mon Sep 17 00:00:00 2001
From: Morgan Tocker
Date: Mon, 15 Apr 2019 11:10:46 -0600
Subject: [PATCH 14/17] Update offline-ansible.md

---
 .../deploy/orchestrated/offline-ansible.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/dev/how-to/deploy/orchestrated/offline-ansible.md b/dev/how-to/deploy/orchestrated/offline-ansible.md
index 4739a0607acd4..cd4d733fcd945 100644
--- a/dev/how-to/deploy/orchestrated/offline-ansible.md
+++ b/dev/how-to/deploy/orchestrated/offline-ansible.md
@@ -20,7 +20,7 @@ Before you start, make sure that you have:
 
 2. Several target machines and one Control Machine
 
-   - For system requirements and configuration, see [Prepare the environment](../op-guide/ansible-deployment.md#prepare).
+   - For system requirements and configuration, see [Prepare the environment](ansible.md#prepare).
    - It is acceptable without access to the Internet.
## Step 1: Install system dependencies on the Control Machine @@ -49,7 +49,7 @@ Take the following steps to install system dependencies on the Control Machine i ## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key -See [Create the `tidb` user on the Control Machine and generate the SSH key](../op-guide/ansible-deployment.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key). +See [Create the `tidb` user on the Control Machine and generate the SSH key](ansible.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key). ## Step 3: Install Ansible and its dependencies offline on the Control Machine @@ -129,25 +129,25 @@ The relationship between the `tidb-ansible` version and the TiDB version is as f ## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine -See [Configure the SSH mutual trust and sudo rules on the Control Machine](../op-guide/ansible-deployment.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine). +See [Configure the SSH mutual trust and sudo rules on the Control Machine](ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine). ## Step 6: Install the NTP service on the target machines -See [Install the NTP service on the target machines](../op-guide/ansible-deployment.md#step-6-install-the-ntp-service-on-the-target-machines). +See [Install the NTP service on the target machines](ansible.md#step-6-install-the-ntp-service-on-the-target-machines). > **Note:** If the time and time zone of all your target machines are same, the NTP service is on and is normally synchronizing time, you can ignore this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal). 
## Step 7: Configure the CPUfreq governor mode on the target machine -See [Configure the CPUfreq governor mode on the target machine](../op-guide/ansible-deployment.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine). +See [Configure the CPUfreq governor mode on the target machine](ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine). ## Step 8: Mount the data disk ext4 filesystem with options on the target machines -See [Mount the data disk ext4 filesystem with options on the target machines](../op-guide/ansible-deployment.md#step-8-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines). +See [Mount the data disk ext4 filesystem with options on the target machines](ansible.md#step-8-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines). ## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster -See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](../op-guide/ansible-deployment.md#step-9-edit-the-inventory-ini-file-to-orchestrate-the-tidb-cluster). +See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](ansible.md#step-9-edit-the-inventory-ini-file-to-orchestrate-the-tidb-cluster). ## Step 10: Deploy the TiDB cluster @@ -162,8 +162,8 @@ See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](../op-guide/ $ ./install_grafana_font_rpms.sh ``` -3. See [Deploy the TiDB cluster](../op-guide/ansible-deployment.md#step-11-deploy-the-tidb-cluster). +3. See [Deploy the TiDB cluster](ansible.md#step-11-deploy-the-tidb-cluster). ## Test the TiDB cluster -See [Test the TiDB cluster](../op-guide/ansible-deployment.md#test-the-tidb-cluster). +See [Test the TiDB cluster](ansible.md#test-the-tidb-cluster). From fd473dc26f7eb331ef0757cc04d8fddf0b9db001 Mon Sep 17 00:00:00 2001 From: Morgan Tocker Date: Mon, 15 Apr 2019 12:12:48 -0600 Subject: [PATCH 15/17] Fix links - make sure they are all relative. Assists in version split.
--- .../deploy/from-tarball/production-environment.md | 8 ++++---- dev/how-to/deploy/from-tarball/testing-environment.md | 4 ++-- dev/how-to/deploy/orchestrated/ansible.md | 10 +++++----- dev/how-to/deploy/orchestrated/docker.md | 2 +- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/dev/how-to/deploy/from-tarball/production-environment.md b/dev/how-to/deploy/from-tarball/production-environment.md index b7b36ad9018af..02182d6d005e5 100755 --- a/dev/how-to/deploy/from-tarball/production-environment.md +++ b/dev/how-to/deploy/from-tarball/production-environment.md @@ -9,11 +9,11 @@ aliases: ['/docs/op-guide/binary-deployment/'] This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD. -See also [local deployment](/op-guide/binary-local-deployment.md) and [testing environment](/dev/how-to/deploy/from-tarball/testing-environment.md) deployment. +See also [local deployment](/op-guide/binary-local-deployment.md) and [testing environment](testing-environment.md) deployment. ## Prepare -Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied: +Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md). Make sure the following requirements are satisfied: ### Operating system @@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. 
Th | Configuration | Description | | :-- | :-------------------- | -| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) | +| Supported Platform | RHEL/CentOS 7.3+ ([more details](../hardware-recommendations.md)) | | File System | ext4 is recommended | | Swap Space | Should be disabled | | Disk Block Size | Set the system disk `Block` size to `4096` | @@ -120,7 +120,7 @@ $ cd tidb-latest-linux-amd64 ## Multiple nodes cluster deployment -For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). +For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](../hardware-recommendations.md). Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details: diff --git a/dev/how-to/deploy/from-tarball/testing-environment.md b/dev/how-to/deploy/from-tarball/testing-environment.md index 002bf00443ca4..4e129a03be037 100644 --- a/dev/how-to/deploy/from-tarball/testing-environment.md +++ b/dev/how-to/deploy/from-tarball/testing-environment.md @@ -13,7 +13,7 @@ See also [local deployment](/op-guide/binary-local-deployment.md) and [productio ## Prepare -Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied: +Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md). Make sure the following requirements are satisfied: ### Operating system @@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. 
Th | Configuration | Description | | :-- | :-------------------- | -| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) | +| Supported Platform | RHEL/CentOS 7.3+ ([more details](../hardware-recommendations.md)) | | File System | ext4 is recommended | | Swap Space | Should be disabled | | Disk Block Size | Set the system disk `Block` size to `4096` | diff --git a/dev/how-to/deploy/orchestrated/ansible.md b/dev/how-to/deploy/orchestrated/ansible.md index b39a5a2975fbb..d9eab86b131d5 100644 --- a/dev/how-to/deploy/orchestrated/ansible.md +++ b/dev/how-to/deploy/orchestrated/ansible.md @@ -19,14 +19,14 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a - Initialize operating system parameters - Deploy the whole TiDB cluster -- [Start the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#start-a-cluster) -- [Stop the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#stop-a-cluster) +- [Start the TiDB cluster](ansible-operations.md#start-a-cluster) +- [Stop the TiDB cluster](ansible-operations.md#stop-a-cluster) - [Modify component configuration](/op-guide/ansible-deployment-rolling-update.md#modify-component-configuration) - [Scale the TiDB cluster](/op-guide/ansible-deployment-scale.md) - [Upgrade the component version](/op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version) - [Enable the cluster binlog](/tools/tidb-binlog-cluster.md) -- [Clean up data of the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#clean-up-cluster-data) -- [Destroy the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#destroy-a-cluster) +- [Clean up data of the TiDB cluster](ansible-operations.md#clean-up-cluster-data) +- [Destroy the TiDB cluster](ansible-operations.md#destroy-a-cluster) ## Prepare @@ -522,7 +522,7 @@ To enable the following control variables, use the capitalized `True`. 
To disabl | tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches | | process_supervision | the supervision way of processes, systemd by default, supervise optional | | timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](/sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values | -| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](/op-guide/recommendation.md#network-requirements) to the white list | +| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](../hardware-recommendations.md#network-requirements) to the white list | | enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it | | set_hostname | to edit the hostname of the managed node based on the IP, False by default | | enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable | diff --git a/dev/how-to/deploy/orchestrated/docker.md b/dev/how-to/deploy/orchestrated/docker.md index e3ae09519b636..c0b3b9b4df7b4 100644 --- a/dev/how-to/deploy/orchestrated/docker.md +++ b/dev/how-to/deploy/orchestrated/docker.md @@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/docker-deployment/'] This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker. -To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Requirements](/dev/how-to/deploy/hardware-recommendations.md). 
+To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Requirements](../hardware-recommendations.md). ## Preparation From 593e346ebaece95ee61a48d2ebd88158cafe1ef6 Mon Sep 17 00:00:00 2001 From: Morgan Tocker Date: Mon, 15 Apr 2019 13:22:42 -0600 Subject: [PATCH 16/17] s/requirements/recommendations/ --- dev/how-to/deploy/orchestrated/docker.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dev/how-to/deploy/orchestrated/docker.md b/dev/how-to/deploy/orchestrated/docker.md index c0b3b9b4df7b4..7ab0f7ceeef2f 100644 --- a/dev/how-to/deploy/orchestrated/docker.md +++ b/dev/how-to/deploy/orchestrated/docker.md @@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/docker-deployment/'] This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker. -To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Requirements](../hardware-recommendations.md). +To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md). ## Preparation From 3939e0a1dbe2c1301a0c736b8a186bb320a0571f Mon Sep 17 00:00:00 2001 From: Morgan Tocker Date: Tue, 23 Apr 2019 13:52:45 -0600 Subject: [PATCH 17/17] Update URLs to use absolute links. 
--- .../from-tarball/production-environment.md | 8 ++++---- .../deploy/from-tarball/testing-environment.md | 6 +++--- .../location-awareness.md | 2 +- dev/how-to/deploy/orchestrated/ansible.md | 10 +++++----- dev/how-to/deploy/orchestrated/docker.md | 2 +- .../deploy/orchestrated/offline-ansible.md | 18 +++++++++--------- 6 files changed, 23 insertions(+), 23 deletions(-) mode change 100755 => 100644 dev/how-to/deploy/from-tarball/production-environment.md diff --git a/dev/how-to/deploy/from-tarball/production-environment.md b/dev/how-to/deploy/from-tarball/production-environment.md old mode 100755 new mode 100644 index 02182d6d005e5..4f7283bf4a200 --- a/dev/how-to/deploy/from-tarball/production-environment.md +++ b/dev/how-to/deploy/from-tarball/production-environment.md @@ -9,11 +9,11 @@ aliases: ['/docs/op-guide/binary-deployment/'] This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD. -See also [local deployment](/op-guide/binary-local-deployment.md) and [testing environment](testing-environment.md) deployment. +See also [local deployment](/dev/how-to/get-started/local-cluster/install-from-binary.md) and [testing environment](/dev/how-to/deploy/from-tarball/testing-environment.md) deployment. ## Prepare -Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md). Make sure the following requirements are satisfied: +Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied: ### Operating system @@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. 
Th | Configuration | Description | | :-- | :-------------------- | -| Supported Platform | RHEL/CentOS 7.3+ ([more details](../hardware-recommendations.md)) | +| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) | | File System | ext4 is recommended | | Swap Space | Should be disabled | | Disk Block Size | Set the system disk `Block` size to `4096` | @@ -120,7 +120,7 @@ $ cd tidb-latest-linux-amd64 ## Multiple nodes cluster deployment -For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](../hardware-recommendations.md). +For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details: diff --git a/dev/how-to/deploy/from-tarball/testing-environment.md b/dev/how-to/deploy/from-tarball/testing-environment.md index 4e129a03be037..410d1ccde2855 100644 --- a/dev/how-to/deploy/from-tarball/testing-environment.md +++ b/dev/how-to/deploy/from-tarball/testing-environment.md @@ -9,11 +9,11 @@ aliases: ['/docs/op-guide/binary-testing-deployment/'] This guide provides installation instructions for all TiDB components across multiple nodes for testing purposes. It does not match the recommended usage for production systems. -See also [local deployment](/op-guide/binary-local-deployment.md) and [production environment](production-environment.md) deployment. +See also [local deployment](/dev/how-to/get-started/local-cluster/install-from-binary.md) and [production environment](/dev/how-to/deploy/from-tarball/production-environment.md) deployment.
## Prepare -Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md). Make sure the following requirements are satisfied: +Before you start, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). Make sure the following requirements are satisfied: ### Operating system @@ -21,7 +21,7 @@ For the operating system, it is recommended to use RHEL/CentOS 7.3 or higher. Th | Configuration | Description | | :-- | :-------------------- | -| Supported Platform | RHEL/CentOS 7.3+ ([more details](../hardware-recommendations.md)) | +| Supported Platform | RHEL/CentOS 7.3+ ([more details](/dev/how-to/deploy/hardware-recommendations.md)) | | File System | ext4 is recommended | | Swap Space | Should be disabled | | Disk Block Size | Set the system disk `Block` size to `4096` | diff --git a/dev/how-to/deploy/geographic-redundancy/location-awareness.md b/dev/how-to/deploy/geographic-redundancy/location-awareness.md index 203c0867dbe25..fc41c02f9cf03 100644 --- a/dev/how-to/deploy/geographic-redundancy/location-awareness.md +++ b/dev/how-to/deploy/geographic-redundancy/location-awareness.md @@ -11,7 +11,7 @@ aliases: ['/docs/op-guide/location-awareness/'] PD schedules according to the topology of the TiKV cluster to maximize the TiKV's capability for disaster recovery. -Before you begin, see [Deploy TiDB Using Ansible (Recommended)](../orchestrated/ansible.md) and [Deploy TiDB Using Docker](../orchestrated/docker.md). +Before you begin, see [Deploy TiDB Using Ansible (Recommended)](/dev/how-to/deploy/orchestrated/ansible.md) and [Deploy TiDB Using Docker](/dev/how-to/deploy/orchestrated/docker.md).
## TiKV reports the topological information diff --git a/dev/how-to/deploy/orchestrated/ansible.md b/dev/how-to/deploy/orchestrated/ansible.md index cf8f667d489bc..b89fabacde78c 100644 --- a/dev/how-to/deploy/orchestrated/ansible.md +++ b/dev/how-to/deploy/orchestrated/ansible.md @@ -19,14 +19,14 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a - Initialize operating system parameters - Deploy the whole TiDB cluster -- [Start the TiDB cluster](ansible-operations.md#start-a-cluster) -- [Stop the TiDB cluster](ansible-operations.md#stop-a-cluster) +- [Start the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#start-a-cluster) +- [Stop the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#stop-a-cluster) - [Modify component configuration](/op-guide/ansible-deployment-rolling-update.md#modify-component-configuration) - [Scale the TiDB cluster](/op-guide/ansible-deployment-scale.md) - [Upgrade the component version](/op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version) - [Enable the cluster binlog](/tools/tidb-binlog-cluster.md) -- [Clean up data of the TiDB cluster](ansible-operations.md#clean-up-cluster-data) -- [Destroy the TiDB cluster](ansible-operations.md#destroy-a-cluster) +- [Clean up data of the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#clean-up-cluster-data) +- [Destroy the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible-operations.md#destroy-a-cluster) ## Prepare @@ -522,7 +522,7 @@ To enable the following control variables, use the capitalized `True`.
To disabl | tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches | | process_supervision | the supervision way of processes, systemd by default, supervise optional | | timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](/sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values | -| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](../hardware-recommendations.md#network-requirements) to the white list | +| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](/dev/how-to/deploy/hardware-recommendations.md#network-requirements) to the white list | | enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it | | set_hostname | to edit the hostname of the managed node based on the IP, False by default | | enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable | diff --git a/dev/how-to/deploy/orchestrated/docker.md b/dev/how-to/deploy/orchestrated/docker.md index 7ab0f7ceeef2f..8a6bde0acce73 100644 --- a/dev/how-to/deploy/orchestrated/docker.md +++ b/dev/how-to/deploy/orchestrated/docker.md @@ -9,7 +9,7 @@ aliases: ['/docs/op-guide/docker-deployment/'] This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker. -To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](../hardware-recommendations.md).
+To learn more, see [TiDB architecture](/architecture.md) and [Software and Hardware Recommendations](/dev/how-to/deploy/hardware-recommendations.md). ## Preparation diff --git a/dev/how-to/deploy/orchestrated/offline-ansible.md b/dev/how-to/deploy/orchestrated/offline-ansible.md index cd4d733fcd945..aa05f7c139cc8 100644 --- a/dev/how-to/deploy/orchestrated/offline-ansible.md +++ b/dev/how-to/deploy/orchestrated/offline-ansible.md @@ -20,7 +20,7 @@ Before you start, make sure that you have: 2. Several target machines and one Control Machine - - For system requirements and configuration, see [Prepare the environment](ansible.md#prepare). + - For system requirements and configuration, see [Prepare the environment](/dev/how-to/deploy/orchestrated/ansible.md#prepare). - It is acceptable without access to the Internet. ## Step 1: Install system dependencies on the Control Machine @@ -49,7 +49,7 @@ Take the following steps to install system dependencies on the Control Machine i ## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key -See [Create the `tidb` user on the Control Machine and generate the SSH key](ansible.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key). +See [Create the `tidb` user on the Control Machine and generate the SSH key](/dev/how-to/deploy/orchestrated/ansible.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key). ## Step 3: Install Ansible and its dependencies offline on the Control Machine @@ -129,25 +129,25 @@ The relationship between the `tidb-ansible` version and the TiDB version is as f ## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine -See [Configure the SSH mutual trust and sudo rules on the Control Machine](ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine).
+See [Configure the SSH mutual trust and sudo rules on the Control Machine](/dev/how-to/deploy/orchestrated/ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine). ## Step 6: Install the NTP service on the target machines -See [Install the NTP service on the target machines](ansible.md#step-6-install-the-ntp-service-on-the-target-machines). +See [Install the NTP service on the target machines](/dev/how-to/deploy/orchestrated/ansible.md#step-6-install-the-ntp-service-on-the-target-machines). > **Note:** If the time and time zone of all your target machines are same, the NTP service is on and is normally synchronizing time, you can ignore this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal). ## Step 7: Configure the CPUfreq governor mode on the target machine -See [Configure the CPUfreq governor mode on the target machine](ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine). +See [Configure the CPUfreq governor mode on the target machine](/dev/how-to/deploy/orchestrated/ansible.md#step-7-configure-the-cpufreq-governor-mode-on-the-target-machine). ## Step 8: Mount the data disk ext4 filesystem with options on the target machines -See [Mount the data disk ext4 filesystem with options on the target machines](ansible.md#step-8-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines). +See [Mount the data disk ext4 filesystem with options on the target machines](/dev/how-to/deploy/orchestrated/ansible.md#step-8-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines). ## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster -See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](ansible.md#step-9-edit-the-inventory-ini-file-to-orchestrate-the-tidb-cluster).
+See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible.md#step-9-edit-the-inventory-ini-file-to-orchestrate-the-tidb-cluster). ## Step 10: Deploy the TiDB cluster @@ -162,8 +162,8 @@ See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](ansible.md#s $ ./install_grafana_font_rpms.sh ``` -3. See [Deploy the TiDB cluster](ansible.md#step-11-deploy-the-tidb-cluster). +3. See [Deploy the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible.md#step-11-deploy-the-tidb-cluster). ## Test the TiDB cluster -See [Test the TiDB cluster](ansible.md#test-the-tidb-cluster). +See [Test the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible.md#test-the-tidb-cluster).