Preview PR pingcap/docs#19899; this preview is triggered from commit
Docsite Preview Bot committed Jan 7, 2025
1 parent c5e1aac commit 5db6248
Showing 1 changed file with 2 additions and 2 deletions.
```diff
@@ -85,7 +85,7 @@ TiDB v8.5.0 introduces multiple enhancements to mitigate the impact of cloud dis
 Failovers are now available in multiple IO delay scenarios, and P99/999 latency during impacts is reduced by up to 98%.
-In the following table of test results, the **Current** column shows the results with IO latency jitter improvements, while the **Original** column shows the results without these improvements:
+In the following table of test results, the **Current** column shows the results with improvements to reduce IO latency jitter, while the **Original** column shows the results without these improvements:
 <table>
 <thead>
@@ -161,7 +161,7 @@ In the following table of test results, the **Current** column shows the results
 Due to the inherent risk of physical disk damage, the cloud disk jitter issue is unavoidable. To mitigate its impact, TiKV introduces a [slow node detection mechanism](https://docs.pingcap.com/tidb/v8.5/pd-scheduling-best-practices#troubleshoot-tikv-node). This mechanism uses [evict-slow-store-scheduler](https://docs.pingcap.com/tidb/v8.5/pd-control#scheduler-show--add--remove--pause--resume--config--describe) to detect and manage slow nodes, reducing the effects of cloud disk jitter.
-The severity of disk jitter might also be highly related to users' workload profiles. In latency-sensitive scenarios, designing applications in conjunction with TiDB features can further minimize the impact of IO jitter on applications. For example, in read-heavy and latency-sensitive environments, adjusting the [tikv_client_read_timeout](/system-variables.md#tikv_client_read_timeout-new-in-v740) system variable according to latency requirements and using stale reads or follower reads can enable faster failover retries to other replica peers for KV requests sent from TiDB. This reduces the impact of IO jitter on a single TiKV node and helps improve query latency. Note that the effectiveness of this feature depends on the workload profile, which should be evaluated before implementation.
+The severity of disk jitter might also be highly related to users' workload profiles. In latency-sensitive scenarios, designing applications in conjunction with TiDB features can further minimize the impact of IO jitter on applications. For example, in read-heavy and latency-sensitive environments, adjusting the [`tikv_client_read_timeout`](/system-variables.md#tikv_client_read_timeout-new-in-v740) system variable according to latency requirements and using stale reads or follower reads can enable faster failover retries to other replica peers for KV requests sent from TiDB. This reduces the impact of IO jitter on a single TiKV node and helps improve query latency. Note that the effectiveness of this feature depends on the workload profile, which should be evaluated before implementation.
 Additionally, cloud users can reduce the probability of jitter by choosing cloud disks with higher performance.
```
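The slow node detection mechanism described in the changed section is driven by the `evict-slow-store-scheduler`, which is managed through `pd-ctl`. A minimal sketch of checking and enabling it, assuming `pd-ctl` is installed and the PD endpoint `http://127.0.0.1:2379` is a placeholder for your cluster's address:

```shell
# Assumption: pd-ctl is on PATH; replace the URL with your PD endpoint.
# List currently enabled schedulers to see whether eviction is already on.
pd-ctl -u http://127.0.0.1:2379 scheduler show

# Add the scheduler so PD evicts Region leaders from a detected slow TiKV node.
pd-ctl -u http://127.0.0.1:2379 scheduler add evict-slow-store-scheduler

# Remove it later if you want to disable slow-store eviction.
pd-ctl -u http://127.0.0.1:2379 scheduler remove evict-slow-store-scheduler
```

These commands operate on a live PD instance, so they are shown here as an operational fragment rather than a runnable example.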

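The paragraph added in this commit suggests pairing a shorter `tikv_client_read_timeout` with stale reads or follower reads. A hedged SQL sketch of those session settings, where the 500 ms timeout, the 5-second staleness bound, and table `t` are illustrative assumptions rather than recommended values:

```sql
-- Assumption: a ~500 ms latency target for point reads (tune to your workload).
-- A shorter KV read timeout lets TiDB retry another replica sooner on IO jitter.
SET tikv_client_read_timeout = 500;

-- Follower read: allow reads to be served by follower replicas as well as the leader.
SET tidb_replica_read = 'leader-and-follower';

-- Stale read alternative: tolerate data up to 5 seconds old for this session.
SET tidb_read_staleness = -5;
SELECT * FROM t WHERE id = 1;  -- t is a hypothetical table
```

As the changed text notes, whether this improves tail latency depends on the workload profile and should be evaluated before rollout.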