diff --git a/pd-control.md b/pd-control.md
index 07fc485c24906..eef7c60114708 100644
--- a/pd-control.md
+++ b/pd-control.md
@@ -172,7 +172,7 @@ Usage:
 
 }
 >> config show cluster-version // Display the current version of the cluster, which is the current minimum version of TiKV nodes in the cluster and does not correspond to the binary version.
-"5.2.2"
+"8.5.1"
 ```
 
 - `max-snapshot-count` controls the maximum number of snapshots that a single store receives or sends out at the same time. The scheduler is restricted by this configuration to avoid taking up normal application resources. When you need to improve the speed of adding replicas or balancing, increase this value.
@@ -227,8 +227,6 @@ Usage:
 
 - `region-score-formula-version` controls the version of the Region score formula. The value options are `v1` and `v2`. The version 2 of the formula helps to reduce redundant balance Region scheduling in some scenarios, such as taking TiKV nodes online or offline.
 
-    {{< copyable "" >}}
-
     ```bash
     config set region-score-formula-version v2
     ```
@@ -255,8 +253,6 @@ Usage:
 
     The following command specifies that the maximum waiting time for the store to go online is 4 hours.
 
-    {{< copyable "" >}}
-
     ```bash
     config set max-store-preparing-time 4h
     ```
@@ -314,7 +310,7 @@ Usage:
 - `cluster-version` is the version of the cluster, which is used to enable or disable some features and to deal with the compatibility issues. By default, it is the minimum version of all normally running TiKV nodes in the cluster. You can set it manually only when you need to roll it back to an earlier version.
 
     ```bash
-    config set cluster-version 1.0.8 // Set the version of the cluster to 1.0.8
+    config set cluster-version 8.5.1 // Set the version of the cluster to 8.5.1
     ```
 
 - `replication-mode` controls the replication mode of Regions in the dual data center scenario. See [Enable the DR Auto-Sync mode](/two-data-centers-in-one-city-deployment.md#enable-the-dr-auto-sync-mode) for details.
@@ -349,8 +345,6 @@ Usage:
 
 - For example, set the value of `flow-round-by-digit` to `4`:
 
-    {{< copyable "" >}}
-
     ```bash
     config set flow-round-by-digit 4
     ```
@@ -579,7 +573,7 @@ Usage:
 }
 >> member delete name pd2 // Delete "pd2"
 Success!
->> member delete id 1319539429105371180 // Delete a node using id
+>> member delete id 1319539429105371180 // Delete a node using ID
 Success!
 >> member leader show // Display the leader information
 {
@@ -804,7 +798,7 @@ Usage:
 
 ### `region topread [limit]`
 
-Use this command to list Regions with top read flow. The default value of the limit is 16.
+Use this command to list Regions with top read flow. The default value of the limit is `16`.
 
 Usage:
 
@@ -818,7 +812,7 @@ Usage:
 
 ### `region topwrite [limit]`
 
-Use this command to list Regions with top write flow. The default value of the limit is 16.
+Use this command to list Regions with top write flow. The default value of the limit is `16`.
 
 Usage:
 
@@ -832,7 +826,7 @@ Usage:
 
 ### `region topconfver [limit]`
 
-Use this command to list Regions with top conf version. The default value of the limit is 16.
+Use this command to list Regions with top conf version. The default value of the limit is `16`.
 
 Usage:
 
@@ -846,7 +840,7 @@ Usage:
 
 ### `region topversion [limit]`
 
-Use this command to list Regions with top version. The default value of the limit is 16.
+Use this command to list Regions with top version. The default value of the limit is `16`.
 
 Usage:
 
@@ -860,7 +854,7 @@ Usage:
 
 ### `region topsize [limit]`
 
-Use this command to list Regions with top approximate size. The default value of the limit is 16.
+Use this command to list Regions with top approximate size. The default value of the limit is `16`.
 
 Usage:
 
@@ -1156,7 +1150,7 @@ store
 }
 ```
 
-To get the store with id of 1, run the following command:
+To get the store with ID 1, run the following command:
 
 ```bash
 store 1
@@ -1168,7 +1162,7 @@ store 1
 
 #### Delete a store
 
-To delete the store with id of 1, run the following command:
+To delete the store with ID 1, run the following command:
 
 ```bash
 store delete 1
@@ -1176,7 +1170,7 @@ store delete 1
 
 To cancel deleting `Offline` state stores which are deleted using `store delete`, run the `store cancel-delete` command. After canceling, the store changes from `Offline` to `Up`. Note that the `store cancel-delete` command cannot change a `Tombstone` state store to the `Up` state.
 
-To cancel deleting the store with id of 1, run the following command:
+To cancel deleting the store with ID 1, run the following command:
 
 ```bash
 store cancel-delete 1
@@ -1196,25 +1190,25 @@ store remove-tombstone
 
 To manage the labels of a store, run the `store label` command.
 
-- To set a label with the key being `"zone"` and value being `"cn"` to the store with id of 1, run the following command:
+- To set a label with the key being `"zone"` and value being `"cn"` to the store with ID 1, run the following command:
 
     ```bash
     store label 1 zone=cn
    ```
 
-- To update the label of a store, for example, changing the value of the key `"zone"` from `"cn"` to `"us"` for the store with id of 1, run the following command:
+- To update the label of a store, for example, changing the value of the key `"zone"` from `"cn"` to `"us"` for the store with ID 1, run the following command:
 
    ```bash
    store label 1 zone=us
   ```
 
-- To rewrite all labels of a store with id of 1, use the `--rewrite` option. Note that this option overwrites all existing labels:
+- To rewrite all labels of a store with ID 1, use the `--rewrite` option. Note that this option overwrites all existing labels:
 
   ```bash
  store label 1 region=us-est-1 disk=ssd --rewrite
  ```
 
-- To delete the `"disk"` label for the store with id of 1, use the `--delete` option:
+- To delete the `"disk"` label for the store with ID 1, use the `--delete` option:
 
  ```bash
  store label 1 disk --delete
@@ -1222,12 +1216,12 @@ To manage the labels of a store, run the `store label` command.
 
 > **Note:**
 >
-> - The label of a store is updated by merging the label in TiKV and that in PD. Specifically, after you modify a store label in the TiKV configuration file and restart the cluster, PD merges its own store label with the TiKV store label, updates the label, and persists the merged result.
+> - The label of a store is updated by a merge strategy. After a TiKV process is restarted, the store labels in its configuration file will be merged with the store labels stored by PD, and the merged result will be persisted. During the merging process, if there are duplicate store labels between the PD side and the TiKV configuration file, the TiKV store label configuration will overwrite the PD label. For example, if the store label for store 1 is set to `"zone=cn"` through `store label 1 zone=cn`, but TiKV's configuration file has `zone = "us"`, after TiKV restarts, the `"zone"` label will be updated to `"us"`.
 > - To manage labels of a store using TiUP, you can run the `store label --force` command to empty the labels stored in PD before restarting the cluster.
 
 #### Configure store weight
 
-To set the leader weight to 5 and Region weight to 10 for the store with id of 1, run the following command:
+To set the leader weight to `5` and Region weight to `10` for the store with ID 1, run the following command:
 
 ```bash
 store weight 1 5 10
@@ -1251,7 +1245,7 @@ You can set the scheduling speed of stores by using `store limit`. For more deta
 
 > **Note:**
 >
-> You can use `pd-ctl` to check the state (`Up`, `Disconnect`, `Offline`, `Down`, or `Tombstone`) of a TiKV store. For the relationship between each state, refer to [Relationship between each state of a TiKV store](/tidb-scheduling.md#information-collection).
+> You can use `pd-ctl` to check the state (`Up`, `Disconnect`, `Offline`, `Down`, or `Tombstone`) of a TiKV store. For the relationship between each state, see [Relationship between each state of a TiKV store](/tidb-scheduling.md#information-collection).
 
 ### `log [fatal | error | warn | info | debug]`
 
@@ -1313,7 +1307,7 @@ unsafe remove-failed-stores show
 ### Simplify the output of `store`
 
 ```bash
->> store --jq=".stores[].store | { id, address, state_name}"
+>> store --jq=".stores[].store | {id, address, state_name}"
 {"id":1,"address":"127.0.0.1:20161","state_name":"Up"}
 {"id":30,"address":"127.0.0.1:20162","state_name":"Up"}
 ...
@@ -1330,10 +1324,8 @@
 
 ### Query all nodes whose status is not `Up`
 
-{{< copyable "" >}}
-
 ```bash
-store --jq='.stores[].store | select(.state_name!="Up") | { id, address, state_name}'
+store --jq='.stores[].store | select(.state_name!="Up") | {id, address, state_name}'
 ```
 
 ```
@@ -1347,7 +1339,7 @@ store --jq='.stores[].store | select(.state_name!="Up") | { id, address, state_n
 {{< copyable "" >}}
 
 ```bash
-store --jq='.stores[].store | select(.labels | length>0 and contains([{"key":"engine","value":"tiflash"}])) | { id, address, state_name}'
+store --jq='.stores[].store | select(.labels | length>0 and contains([{"key":"engine","value":"tiflash"}])) | {id, address, state_name}'
 ```
 
 ```
@@ -1398,7 +1390,7 @@ You can also find out all Regions that have a replica on store30 or store31 in t
 
 ### Look for relevant Regions when restoring data
 
-For example, when [store1, store30, store31] is unavailable at its downtime, you can find all Regions whose Down replicas are more than normal replicas:
+For example, when `[store1, store30, store31]` are unavailable due to downtime, you can find all Regions whose Down replicas are more than normal replicas:
 
 ```bash
 >> region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length as $total | map(if .==(1,30,31) then . else empty end) | length>=$total-length) }"
@@ -1408,14 +1400,14 @@ For example, when [store1, store30, store31] is unavailable at its downtime, you
 ...
 ```
 
-Or when [store1, store30, store31] fails to start, you can find Regions where the data can be manually removed safely on store1. In this way, you can filter out all Regions that have a replica on store1 but don't have other DownPeers:
+Or when `[store1, store30, store31]` fail to start, you can find the Regions whose data on store1 can be safely removed manually. In this way, you can filter out all Regions that have a replica on store1 but don't have other DownPeers:
 
 ```bash
 >> region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length>1 and any(.==1) and all(.!=(30,31)))}"
 {"id":24,"peer_stores":[1,32,33]}
 ```
 
-When [store30, store31] is down, find out all Regions that can be safely processed by creating the `remove-peer` Operator, that is, Regions with one and only DownPeer:
+When `[store30, store31]` are down, you can find all Regions that can be safely processed by creating the `remove-peer` Operator, that is, Regions with exactly one DownPeer:
 
 ```bash
 >> region --jq=".regions[] | {id: .id, remove_peer: [.peers[].store_id] | select(length>1) | map(if .==(30,31) then . else empty end) | select(length==1)}"
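
The replica-counting rules behind the `region --jq` examples in the last hunks are easy to get wrong, so it can help to sanity-check them offline before running them against a live cluster. The following Python sketch models two of those rules; the region records and store IDs are made up for illustration, and the function names are ours, not part of pd-ctl:

```python
# Offline sketch of the selection logic in the jq filters above.
# Region data and store IDs below are illustrative only.

def majority_down(region, down):
    """Down replicas are at least as many as healthy ones (possible data loss)."""
    peers = region["peer_stores"]
    n_down = sum(1 for s in peers if s in down)
    return n_down >= len(peers) - n_down

def single_down_peer(region, down):
    """Exactly one replica sits on a down store, so one `remove-peer`
    Operator is enough to repair the Region."""
    peers = region["peer_stores"]
    return len(peers) > 1 and sum(1 for s in peers if s in down) == 1

regions = [
    {"id": 64, "peer_stores": [1, 30, 31]},  # every replica on a failed store
    {"id": 39, "peer_stores": [1, 30, 32]},  # majority of replicas failed
    {"id": 24, "peer_stores": [1, 32, 33]},  # only the store1 replica failed
]

print([r["id"] for r in regions if majority_down(r, {1, 30, 31})])      # [64, 39]
print([r["id"] for r in regions if single_down_peer(r, {30, 31})])      # [39]
```

Checking the rules this way mirrors what the jq does: `majority_down` corresponds to `length>=$total-length`, and `single_down_peer` to `map(...) | select(length==1)`.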
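
The store-label merge behavior rewritten in the note above (TiKV's file-based labels overwrite PD's stored labels on duplicate keys after a restart) can be modeled as a plain dictionary merge. This is a minimal sketch of the described behavior, not PD's actual implementation:

```python
def merge_store_labels(pd_labels, tikv_config_labels):
    """Model of the documented merge: PD's stored labels are the base, and
    labels from the TiKV configuration file win on duplicate keys."""
    merged = dict(pd_labels)          # start from what PD has persisted
    merged.update(tikv_config_labels)  # TiKV config overwrites duplicates
    return merged

# `store label 1 zone=cn` stored zone=cn in PD, but tikv.toml sets zone = "us",
# so after the TiKV restart the merged result keeps "us":
print(merge_store_labels({"zone": "cn"}, {"zone": "us"}))  # {'zone': 'us'}
```

Keys present on only one side survive the merge unchanged, which matches why `store label --force` (emptying PD's labels first) is needed to truly reset labels via TiUP.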