From 9e53b404119adc2f024d7a6c47d6f3e24afded70 Mon Sep 17 00:00:00 2001
From: Enwei
Date: Wed, 8 Sep 2021 10:48:58 +0200
Subject: [PATCH] BR FAQ: add a warning about multi br importing (#6263)

---
 br/backup-and-restore-faq.md   | 8 ++++++++
 br/use-br-command-line-tool.md | 2 ++
 2 files changed, 10 insertions(+)

diff --git a/br/backup-and-restore-faq.md b/br/backup-and-restore-faq.md
index d9abf4c2aaeb4..c69435fad3bcd 100644
--- a/br/backup-and-restore-faq.md
+++ b/br/backup-and-restore-faq.md
@@ -162,3 +162,11 @@ BR does not back up statistics (except in v4.0.9). Therefore, after restoring th
 In v4.0.9, BR backs up statistics by default, which consumes too much memory. To ensure that the backup process goes well, the backup for statistics is disabled by default starting from v4.0.10.
 
 If you do not execute `ANALYZE` on the table, TiDB will fail to select the optimized execution plan due to inaccurate statistics. If query performance is not a key concern, you can ignore `ANALYZE`.
+
+## Can I use multiple BR processes at the same time to restore the data of a single cluster?
+
+**It is strongly recommended not to** use multiple BR processes at the same time to restore the data of a single cluster, for the following reasons:
+
++ When BR restores data, it modifies some global configurations of PD. If you use multiple BR processes for data restore at the same time, these configurations might be mistakenly overwritten, causing an abnormal cluster status.
++ BR consumes a lot of cluster resources to restore data, so running BR processes in parallel improves the restore speed only to a limited extent.
++ Running multiple BR processes in parallel for data restore has not been tested, so it is not guaranteed to succeed.
\ No newline at end of file
diff --git a/br/use-br-command-line-tool.md b/br/use-br-command-line-tool.md
index f9f5ed28495a7..27a559572d1a6 100644
--- a/br/use-br-command-line-tool.md
+++ b/br/use-br-command-line-tool.md
@@ -307,6 +307,8 @@ To restore the cluster data, use the `br restore` command. You can add the `full
 > - Where each peer is scattered to during restore is random. We don't know in advance which node will read which file.
 >
 > These can be avoided using shared storage, for example mounting an NFS on the local path, or using S3. With network storage, every node can automatically read every SST file, so these caveats no longer apply.
+>
+> Also, note that you can run only one restore operation for a single cluster at a time. Otherwise, unexpected behaviors might occur. For details, see [FAQ](/br/backup-and-restore-faq.md#can-i-use-multiple-br-processes-at-the-same-time-to-restore-the-data-of-a-single-cluster).
 
 ### Restore all the backup data
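
The "one restore operation per cluster at a time" rule added by this patch can also be enforced operationally on the machine that launches restores. The sketch below is an editor's illustration, not part of BR or this patch: it uses `flock(1)` (util-linux, Linux only) to reject a second restore attempt while one is running. The lock path and the `echo` placeholder standing in for a real `br restore` invocation are assumptions.

```shell
#!/bin/sh
# Sketch: serialize restore runs for a single cluster with flock(1).
# The lock path and the echo placeholder are assumptions for illustration;
# the guarded command would normally be a real `br restore ...` invocation.

LOCK="${TMPDIR:-/tmp}/br-restore-demo.lock"

guarded_restore() {
  # -n: fail immediately instead of queueing behind a running restore
  if ! flock -n 9; then
    echo "another restore is running; aborting"
    return 1
  fi
  echo "restore running"   # placeholder for: br restore full ...
}

# First run acquires the lock; a second attempt while it is held is rejected,
# because flock locks the file itself, not a particular file descriptor.
(
  guarded_restore
  ( guarded_restore ) 9>"$LOCK" || true
) 9>"$LOCK"
```

The caller supplies file descriptor 9 on the lock file via redirection, so the lock is released automatically when the subshell exits, even if the restore command crashes. This only guards launches from hosts sharing that lock file; it does not replace the cluster-level warning above.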