
Commit

[Doc] fix path inconsistency in code in stream load user guide doc (#…
amber-create authored Mar 27, 2024
1 parent 1dee418 commit 4d224e1
Showing 2 changed files with 6 additions and 7 deletions.
7 changes: 3 additions & 4 deletions docs/en/loading/StreamLoad.md
@@ -296,7 +296,6 @@ Broker Load is an asynchronous loading method. After you submit a load job, Star
- Currently Broker Load supports loading from a local file system only through a single broker whose version is v2.5 or later.
- Highly concurrent queries against a single broker may cause issues such as timeout and OOM. To mitigate the impact, you can use the `pipeline_dop` variable (see [System variable](../reference/System_variable.md#pipeline_dop)) to set the query parallelism for Broker Load. For queries against a single broker, we recommend that you set `pipeline_dop` to a value smaller than `16`.

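The recommendation above could be applied as a session-level setting; a minimal sketch (the variable name comes from the text, while the value `8` is an arbitrary example under the suggested ceiling of `16`):

```SQL
-- Lower the query parallelism used for Broker Load against a single broker.
SET pipeline_dop = 8;
```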

### Typical example

Broker Load supports loading from a single data file to a single table, loading from multiple data files to a single table, and loading from multiple data files to multiple tables. This section uses loading from multiple data files to a single table as an example.
@@ -305,7 +304,7 @@ Note that in StarRocks some literals are used as reserved keywords by the SQL la

#### Prepare datasets

- Use the CSV file format as an example. Log in to your local file system, and create two CSV files, `file1.csv` and `file2.csv`, in a specific storage location (for example, `/user/starrocks/`). Both files consist of three columns, which represent the user ID, user name, and user score in sequence.
+ Use the CSV file format as an example. Log in to your local file system, and create two CSV files, `file1.csv` and `file2.csv`, in a specific storage location (for example, `/home/disk1/business/`). Both files consist of three columns, which represent the user ID, user name, and user score in sequence.

- `file1.csv`

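The files' rows are elided from this diff; a hypothetical shell sketch of creating two such three-column files (the sample values are invented, and `/tmp/starrocks_demo` is a writable stand-in for the path used in the guide):

```shell
# Hypothetical sample data: columns are user ID, user name, user score.
mkdir -p /tmp/starrocks_demo
cat > /tmp/starrocks_demo/file1.csv <<'EOF'
1,Lily,21
2,Rose,22
3,Alice,23
EOF
cat > /tmp/starrocks_demo/file2.csv <<'EOF'
4,Julia,24
5,Sofia,25
6,Emma,26
EOF
```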
@@ -351,7 +350,7 @@ PROPERTIES("replication_num"="1");

#### Start a Broker Load

- Run the following command to start a Broker Load job that loads data from all data files (`file1.csv` and `file2.csv`) stored in the `/user/starrocks/` path of your local file system to the StarRocks table `mytable`:
+ Run the following command to start a Broker Load job that loads data from all data files (`file1.csv` and `file2.csv`) stored in the `/home/disk1/business/` path of your local file system to the StarRocks table `mytable`:

```SQL
LOAD LABEL mydatabase.label_local
@@ -361,7 +360,7 @@ LOAD LABEL mydatabase.label_local
COLUMNS TERMINATED BY ","
(id, name, score)
)
- WITH BROKER
+ WITH BROKER "sole_broker"
PROPERTIES
(
"timeout" = "3600"
```
6 changes: 3 additions & 3 deletions docs/zh/loading/StreamLoad.md
@@ -303,7 +303,7 @@ Broker Load supports loading a single data file into a single table, loading multiple data…

#### Sample data

- Using CSV-formatted data as an example, log in to your local file system and create two CSV data files, `file1.csv` and `file2.csv`, in a specified path (assume `/user/starrocks/`). Both data files consist of three columns, which represent the user ID, user name, and user score in sequence, as shown below:
+ Using CSV-formatted data as an example, log in to your local file system and create two CSV data files, `file1.csv` and `file2.csv`, in a specified path (assume `/home/disk1/business/`). Both data files consist of three columns, which represent the user ID, user name, and user score in sequence, as shown below:

- `file1.csv`

@@ -351,7 +351,7 @@ PROPERTIES("replication_num"="1");

#### Submit a load job

- Run the following statement to load the data of all the data files (`file1.csv` and `file2.csv`) in the `/user/starrocks/` path of your local file system into the destination table `mytable`:
+ Run the following statement to load the data of all the data files (`file1.csv` and `file2.csv`) in the `/home/disk1/business/` path of your local file system into the destination table `mytable`:

```SQL
LOAD LABEL mydatabase.label_local
@@ -435,7 +435,7 @@ WHERE LABEL = "label_local";

1. Mount the NAS to all BE and FE nodes, and make sure that the mount path is identical on all nodes. This way, all BEs can access the NAS just like they access their own local files.
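   A sketch of such a mount as an `/etc/fstab` entry (the server name, export path, and mount point below are hypothetical; the real values depend on your NAS):

   ```
   # Hypothetical NFS export; use the identical mount point on every FE and BE node.
   nas-server:/export/starrocks  /mnt/nas  nfs  defaults,_netdev  0 0
   ```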

- 2. Use Broker Load to load the data.
+ 2. Use Broker Load to load the data. For example:

```SQL
LOAD LABEL test_db.label_nas
```
