What did you do?
Restored a workload of 190 GB with 6000 tables; according to the backup meta, about 42K files were backed up.
What did you expect to see?
The restore finishes, the region count is less than 42K (otherwise we would almost certainly get empty regions), and the number of empty regions is a reasonable value.
What did you see instead?
See figures below.
Figure 1. too many unhealthy regions
Figure 2. too many regions
What version of BR and TiDB/TiKV/PD are you using?
Nightly (2020-7-20)
Operation logs
(The log is too large to be uploaded... Too many write conflicts from creating tables concurrently...)
Currently, we split regions at two classes of keys: the new key of the rewrite rules (t{new_table_id}) and the end key of each file. Generally, this does two things:
It splits at the first and the last backed-up key of each table, so no records from other tables share a region with these records.
This is the safest approach: since, for performance, each download RPC can carry only one rewrite rule, having no table overlap means each download RPC can always pick the rewrite rule of the table (or index) being restored.
However, this method creates more empty regions: in the example, the two regions [0, t{new_table_id}) ([1]) and [t{new_table_id}_r{last_record_key_backed_up}, ∞) ([2]) become empty.
Maybe the last key of each table, or the first key, can be omitted. In a many-table workload, if the previous table is already split at its end, then the next table can reuse the region at [2], and vice versa; see the sketch below.
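A minimal, illustrative Go sketch of the idea (not BR's actual code; the names `RewriteRule`, `File`, and `collectSplitKeys` are made up here): split keys are gathered from the rewrite rules' new table prefixes plus the end key of every backed-up file, and a hypothetical `omitTablePrefix` flag models the proposal of skipping the table prefix key so adjacent tables can share the boundary region instead of leaving an empty one.

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// RewriteRule and File are simplified stand-ins for the restore metadata.
type RewriteRule struct {
	NewKeyPrefix []byte // e.g. t{new_table_id}
}

type File struct {
	EndKey []byte // last key covered by this backed-up file, already rewritten
}

// collectSplitKeys gathers the keys regions would be split at.
// With omitTablePrefix set, the table's prefix key is skipped, so the tail
// region of the previous table can absorb the head of the next table rather
// than leaving an empty region like [2] behind.
func collectSplitKeys(rules []RewriteRule, files []File, omitTablePrefix bool) [][]byte {
	var keys [][]byte
	if !omitTablePrefix {
		for _, r := range rules {
			keys = append(keys, r.NewKeyPrefix)
		}
	}
	for _, f := range files {
		keys = append(keys, f.EndKey)
	}
	sort.Slice(keys, func(i, j int) bool { return bytes.Compare(keys[i], keys[j]) < 0 })
	return keys
}

func main() {
	rules := []RewriteRule{{NewKeyPrefix: []byte("t100")}}
	files := []File{{EndKey: []byte("t100_r9999")}}
	fmt.Printf("current strategy:  %q\n", collectSplitKeys(rules, files, false))
	fmt.Printf("proposed strategy: %q\n", collectSplitKeys(rules, files, true))
}
```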