benchmark: update wording #1055

Merged
merged 2 commits on Apr 18, 2019
11 changes: 6 additions & 5 deletions benchmark/sysbench-v4.md
@@ -22,8 +22,9 @@ Place: Beijing
## Test environment

- [Hardware requirements](https://pingcap.com/docs/op-guide/recommendation/)
-- The TiDB cluster is deployed according to the [TiDB Deployment Guide](https://pingcap.com/docs/op-guide/ansible-deployment/). Suppose there are 3 servers in total. It is recommended to deploy 1 TiDB, 1 PD and 1 TiKV on each server. As for disk space, suppose that there are 32 tables and 10M rows of data on each table, it is recommended that the disk space where TiKV's data directory resides is larger than 512 GB.
-The number of concurrent connections to a single TiDB cluster is recommended to be under 500. If you need to increase the concurrency pressure on the entire system, you can add TiDB instances to the cluster whose number depends on the pressure of the test.

+- The TiDB cluster is deployed according to the [TiDB Deployment Guide](https://pingcap.com/docs/op-guide/ansible-deployment/). Suppose there are 3 servers in total. It is recommended to deploy 1 TiDB instance, 1 PD instance and 1 TiKV instance on each server. As for disk space, supposing that there are 32 tables and 10M rows of data on each table, it is recommended that the disk space where TiKV's data directory resides is larger than 512 GB.
+The number of concurrent connections to a single TiDB cluster is recommended to be under 500. If you need to increase the concurrency pressure on the entire system, you can add TiDB instances to the cluster whose number depends on the pressure of the test.
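The 3-server mixed topology recommended above can be sketched as a tidb-ansible inventory. The group names follow tidb-ansible's `inventory.ini` convention; the IP addresses are hypothetical placeholders, not values from the test:

```
# Hypothetical 3-server deployment: 1 TiDB, 1 PD and 1 TiKV instance per server
[tidb_servers]
172.16.10.1
172.16.10.2
172.16.10.3

[pd_servers]
172.16.10.1
172.16.10.2
172.16.10.3

[tikv_servers]
172.16.10.1
172.16.10.2
172.16.10.3
```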

IDC machines:

@@ -56,7 +57,7 @@ IDC machines:

### TiDB configuration

-Higher log level means fewer logs to be printed and thus positively influence TiDB performance. Turn on the `prepared plan cache` in the TiDB configuration to lower the cost of optimizing execution plan. Specifically, you can add the following command in the TiDB configuration file:
+Higher log level means fewer logs to be printed and thus positively influences TiDB performance. Enable `prepared plan cache` in the TiDB configuration to lower the cost of optimizing execution plan. Specifically, you can add the following command in the TiDB configuration file:

```
[log]
level = "error"

[prepared-plan-cache]
enabled = true
```

@@ -69,7 +70,7 @@ ### TiKV configuration

Higher log level also means better performance for TiKV.

-As TiKV is deployed in clusters, the Raft algorithm can guarantee that data is written into most of the nodes. Therefore, apart from the scenarios where data security is extremely sensitive, the `sync-log` can be turned off in raftstore.
+As TiKV is deployed in clusters, the Raft algorithm can guarantee that data is written into most of the nodes. Therefore, except the scenarios where data security is extremely sensitive, `sync-log` can be disabled in raftstore.
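Disabling it is a one-line change in the TiKV configuration file. A minimal sketch, assuming the Raft replication guarantee described above is acceptable for your data-safety requirements:

```
[raftstore]
# Skip fsync on every Raft write; replication to a majority of nodes
# still protects the data, per the paragraph above.
sync-log = false
```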

There are 2 Column Families (Default CF and Write CF) on the TiKV cluster, which are mainly used to store different types of data. For the Sysbench test, the Column Family that is used to import data has a constant proportion among TiDB clusters:

@@ -233,7 +234,7 @@

Take HAproxy as an example. The parameter `nbproc` can increase the number of processes.
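A sketch of an `haproxy.cfg` using `nbproc` to proxy a TiDB cluster. The bind port, backend addresses, and process count are illustrative assumptions, not values from the test:

```
global
    nbproc 40            # number of proxy processes; raise to use more cores

defaults
    mode tcp             # TiDB speaks the MySQL protocol, so proxy at TCP level
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen tidb-cluster
    bind 0.0.0.0:3390
    balance roundrobin
    server tidb-1 172.16.10.1:4000
    server tidb-2 172.16.10.2:4000
    server tidb-3 172.16.10.3:4000
```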

### Under high concurrency, why is the CPU utilization rate of TiKV still low?

-Although the overall CPU utilization rate is low for TiKV, the CPU utilization rate of some modules is the cluster might be high.
+Although the overall CPU utilization rate is low for TiKV, the CPU utilization rate of some modules in the cluster might be high.

The maximum concurrency limits for other modules on TiKV, such as storage readpool, coprocessor, and gRPC, can be adjusted through the TiKV configuration file.
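For illustration, a TiKV configuration fragment raising those limits. The section and key names follow TiKV's configuration file; the values are assumptions to be sized against your core count, not recommendations from the test:

```
[readpool.storage]
high-concurrency = 8     # threads for high-priority storage reads

[readpool.coprocessor]
high-concurrency = 8     # threads for high-priority coprocessor requests

[server]
grpc-concurrency = 4     # number of gRPC worker threads
```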
