This is an automated email from the ASF dual-hosted git repository.

liaoxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 2ea857dde66 [fix](load) fix typos in load docs (#1238)
2ea857dde66 is described below

commit 2ea857dde66c4004179e2cfe44f3a21ac2f7c463
Author: Kaijie Chen <c...@apache.org>
AuthorDate: Wed Oct 30 22:23:57 2024 +0800

    [fix](load) fix typos in load docs (#1238)
---
 docs/data-operate/import/error-data-handling.md    |  6 +--
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../import/import-way/stream-load-manual.md        |  4 +-
 docs/data-operate/import/load-data-format.md       |  6 +--
 docs/lakehouse/datalake-analytics/hudi.md          |  2 +-
 .../data-operate/import/error-data-handling.md     |  4 +-
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../data-operate/import/load-data-format.md        | 10 ++---
 .../import/import-scenes/external-storage-load.md  |  2 +-
 .../import/import-way/s3-load-manual.md            |  2 +-
 .../data-operate/import/broker-load-manual.md      |  4 +-
 .../data-operate/import/error-data-handling.md     |  4 +-
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../data-operate/import/load-data-format.md        |  8 ++--
 .../data-operate/import/error-data-handling.md     |  4 +-
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../import/import-way/insert-into-manual.md        |  2 +-
 .../import/import-way/stream-load-manual.md        | 46 ++++++++++++----------
 .../data-operate/import/load-data-format.md        |  8 ++--
 .../Load/BROKER-LOAD.md                            |  8 ++--
 .../import/import-scenes/external-storage-load.md  |  2 +-
 .../import/import-way/s3-load-manual.md            |  2 +-
 .../data-operate/import/broker-load-manual.md      |  4 +-
 .../lakehouse/datalake-analytics/hudi.md           |  2 +-
 .../data-operate/import/error-data-handling.md     |  6 +--
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../data-operate/import/load-data-format.md        |  6 +--
 .../lakehouse/datalake-analytics/hudi.md           |  2 +-
 .../data-operate/import/error-data-handling.md     |  6 +--
 .../import/import-way/broker-load-manual.md        |  4 +-
 .../import/import-way/insert-into-manual.md        |  2 +-
 .../import/import-way/stream-load-manual.md        | 36 ++++++++++-------
 .../data-operate/import/load-data-format.md        |  8 ++--
 .../lakehouse/datalake-analytics/hudi.md           |  2 +-
 .../Load/BROKER-LOAD.md                            |  8 ++--
 35 files changed, 121 insertions(+), 109 deletions(-)

diff --git a/docs/data-operate/import/error-data-handling.md 
b/docs/data-operate/import/error-data-handling.md
index fc0c7438905..a695936a764 100644
--- a/docs/data-operate/import/error-data-handling.md
+++ b/docs/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ By default, strict mode is set to False, which means it is 
disabled. The method
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ The default value of `max_filter_ratio` is 0, which means 
that if there is any e
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -279,4 +279,4 @@ The default value of `max_filter_ratio` is 0, which means 
that if there is any e
    ```
 :::tip
 The `insert_max_filter_ratio` only takes effect when the value of 
`enable_insert_strict` is `false`, and it is used to control the maximum error 
rate of `INSERT INTO FROM S3/HDFS/LOCAL()`. The default value is 1.0, which 
means tolerating all errors.
-:::
\ No newline at end of file
+:::
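The context in the hunk above notes that `max_filter_ratio` defaults to 0, so any error row fails the load. As a rough illustration (the header value, table, file, and host below are placeholders, not taken from the patch), a Stream Load tolerating up to 10% filtered rows could pass the ratio as a header:

```shell
# Illustrative only: header controlling the tolerated error rate.
# 0 (the default) aborts on the first filtered row; 0.1 allows up to 10%.
ratio_header="max_filter_ratio:0.1"
echo "$ratio_header"
# Hypothetical usage:
# curl --location-trusted -u user:passwd -H "Expect:100-continue" \
#     -H "$ratio_header" -T your_file.txt \
#     -XPUT http://<fe_ip>:<fe_http_port>/api/example_db/load_test/_stream_load
```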
diff --git a/docs/data-operate/import/import-way/broker-load-manual.md 
b/docs/data-operate/import/import-way/broker-load-manual.md
index d417103d64c..715dc1847d5 100644
--- a/docs/data-operate/import/import-way/broker-load-manual.md
+++ b/docs/data-operate/import/import-way/broker-load-manual.md
@@ -501,7 +501,7 @@ Doris supports importing data directly from object storage 
systems that support
 ### Import example
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -717,4 +717,4 @@ In general, user environments may not reach speeds of 
10M/s, so it is recommende
 
 ## More Help
 
-For more detailed syntax and best practices for using  [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 , please refer to the Broker Load command manual. You can also enter HELP 
BROKER LOAD in the MySQL client command line to obtain more help information.
\ No newline at end of file
+For more detailed syntax and best practices for using  [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 , please refer to the Broker Load command manual. You can also enter HELP 
BROKER LOAD in the MySQL client command line to obtain more help information.
diff --git a/docs/data-operate/import/import-way/stream-load-manual.md 
b/docs/data-operate/import/import-way/stream-load-manual.md
index d5769a4293a..c6937239594 100644
--- a/docs/data-operate/import/import-way/stream-load-manual.md
+++ b/docs/data-operate/import/import-way/stream-load-manual.md
@@ -1055,7 +1055,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 ### Label, loading transaction, multi-table atomicity
 
-All load jobs in Doris are atomically effective. And multiple tables loading 
in the same load job can also guarantee atomicity. At the same time, Doris can 
also use the Label mechanism to ensure that data loading is not lost or 
duplicated. For specific instructions, please refer to the [Import Transactions 
and Atomicity](../../../data-operate/import/load-atomicity) documentation.
+All load jobs in Doris are atomically effective. And multiple tables loading 
in the same load job can also guarantee atomicity. At the same time, Doris can 
also use the Label mechanism to ensure that data loading is not lost or 
duplicated. For specific instructions, please refer to the [Import Transactions 
and Atomicity](../../../data-operate/transaction) documentation.
 
 ### Column mapping, derived columns, and filtering
 
@@ -1063,7 +1063,7 @@ Doris supports a very rich set of column transformations 
and filtering operation
 
 ### Enable strict mode import
 
-The strict_mode attribute is used to set whether the import task runs in 
strict mode. This attribute affects the results of column mapping, 
transformation, and filtering, and it also controls the behavior of partial 
column updates. For specific instructions on strict mode, please refer to the 
[Strict Mode](../../../data-operate/import/load-strict-mode) documentation.
+The strict_mode attribute is used to set whether the import task runs in 
strict mode. This attribute affects the results of column mapping, 
transformation, and filtering, and it also controls the behavior of partial 
column updates. For specific instructions on strict mode, please refer to the 
[Error Data Handling](../../../data-operate/import/error-data-handling) 
documentation.
 
 ### Perform partial column updates/flexible partial update during import
 
diff --git a/docs/data-operate/import/load-data-format.md 
b/docs/data-operate/import/load-data-format.md
index a69d92c57db..80cf760cb16 100644
--- a/docs/data-operate/import/load-data-format.md
+++ b/docs/data-operate/import/load-data-format.md
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -742,7 +742,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -779,7 +779,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
diff --git a/docs/lakehouse/datalake-analytics/hudi.md 
b/docs/lakehouse/datalake-analytics/hudi.md
index c7e6783bce3..955981b97c9 100644
--- a/docs/lakehouse/datalake-analytics/hudi.md
+++ b/docs/lakehouse/datalake-analytics/hudi.md
@@ -78,7 +78,7 @@ Doris uses the parquet native reader to read the data files 
of the COW table, an
 |      numNodes=6                                                              
|
 |      hudiNativeReadSplits=717/810                                            
|
 ```
-Users can view the perfomace of Java SDK through 
[profile](../../admin-manual/fe/profile-action.md), for exmpale:
+Users can view the performance of the Java SDK through 
[profile](../../admin-manual/fe/profile-action.md), for example:
 ```
 -  HudiJniScanner:  0ns
   -  FillBlockTime:  31.29ms
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/error-data-handling.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/error-data-handling.md
index aa281d16321..ccc050246a6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/error-data-handling.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
index 19515f4d413..1989caf79c8 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
@@ -502,7 +502,7 @@ Doris 支持通过 S3 协议直接从支持 S3 协议的对象存储系统导入
 ### 导入示例
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -735,4 +735,4 @@ bucket 信息填写不正确或者不存在。或者 bucket 的格式不受支
 
 ## 更多帮助
 
-关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
\ No newline at end of file
+关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/load-data-format.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/load-data-format.md
index 06d2a7b075b..fbcfbcaa4fe 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/load-data-format.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/load-data-format.md
@@ -62,7 +62,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "line_delimiter:\n" \
     -H "columns_delimiter:|" \
     -H "enclose:'" \
-    -H "escape:\" \
+    -H 'escape:\' \
     -H "skip_lines:2" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
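The hunk above swaps `-H "escape:\" \` for `-H 'escape:\' \`. A quick shell sketch of why: inside double quotes, the trailing backslash escapes the closing quote (so the header string never terminates and the line continuation is consumed), while single quotes pass the backslash through literally:

```shell
# Sketch of the quoting fix above: both forms yield the same header value,
# but only single quotes let a lone trailing backslash be written directly.
single='escape:\'      # single-quoted: backslash is literal
double="escape:\\"     # double-quoted: backslash must be doubled
[ "$single" = "$double" ] && echo "same header value"
```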
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -741,7 +741,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -778,7 +778,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
@@ -791,4 +791,4 @@ WITH S3
     "AWS_SECRET_KEY"="AWS_SECRET_KEY",
     "AWS_REGION" = "AWS_REGION"
 );
-```
\ No newline at end of file
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
index e64b27879a6..bfede3d95d1 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
@@ -147,7 +147,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
 
 完整示例如下
 ```
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
index e15eebce12e..6fd64776dd4 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
@@ -59,7 +59,7 @@ under the License.
 完整示例如下
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
index 2e5f2894cac..d974c4c0c67 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
@@ -502,7 +502,7 @@ Doris 支持通过 S3 协议直接从支持 S3 协议的对象存储系统导入
 ### 导入示例
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -735,4 +735,4 @@ bucket 信息填写不正确或者不存在。或者 bucket 的格式不受支
 
 ## 更多帮助
 
-关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
\ No newline at end of file
+关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/error-data-handling.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/error-data-handling.md
index aa281d16321..ccc050246a6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/error-data-handling.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
index 19515f4d413..1989caf79c8 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
@@ -502,7 +502,7 @@ Doris 支持通过 S3 协议直接从支持 S3 协议的对象存储系统导入
 ### 导入示例
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -735,4 +735,4 @@ bucket 信息填写不正确或者不存在。或者 bucket 的格式不受支
 
 ## 更多帮助
 
-关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
\ No newline at end of file
+关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/load-data-format.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/load-data-format.md
index 06d2a7b075b..e35a6583e16 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/load-data-format.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/load-data-format.md
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -741,7 +741,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -778,7 +778,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
@@ -791,4 +791,4 @@ WITH S3
     "AWS_SECRET_KEY"="AWS_SECRET_KEY",
     "AWS_REGION" = "AWS_REGION"
 );
-```
\ No newline at end of file
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/error-data-handling.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/error-data-handling.md
index aa281d16321..ccc050246a6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/error-data-handling.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ curl  --location-trusted -u root -H "partial_columns:true" 
-H "strict_mode:true"
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
index 19515f4d413..1989caf79c8 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
@@ -502,7 +502,7 @@ Doris 支持通过 S3 协议直接从支持 S3 协议的对象存储系统导入
 ### 导入示例
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -735,4 +735,4 @@ bucket 信息填写不正确或者不存在。或者 bucket 的格式不受支
 
 ## 更多帮助
 
-关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
\ No newline at end of file
+关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
index 33db963ac45..fd60f94ef30 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
@@ -134,7 +134,7 @@ MySQL> SELECT COUNT(*) FROM testdb.test_table2;
 
 4. 可以使用 [JOB](../../scheduler/job-scheduler.md) 异步执行 INSERT。
 
-5. 数据源可以是 [tvf](../../../lakehouse/file.md) 或者 
[catalog](../../../lakehouse/database) 中的表。
+5. 数据源可以是 [tvf](../../../lakehouse/file.md) 或者 
[catalog](../../../lakehouse/database/jdbc) 中的表。
 
 ### 查看导入作业
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
index 4c37e001b79..47503161fc6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
@@ -420,13 +420,15 @@ curl --location-trusted -u user:passwd [-H "sql: 
${load_sql}"...] -T data.file -
 load_sql 举例:
 
 ```shell
-insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
+insert into db.table (col1, col2, ...) select c1, c2, ... from 
http_stream("property1"="value1");
 ```
 
 http_stream 支持的参数:
 
 "column_separator" = ",", "format" = "CSV",
 
+导入 CSV 文件时,`select ... from http_stream` 子句中的列名格式必须为 `c1, c2, c3, ...`,见下方示例
+
 ...
 
 示例:
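The note added in the hunk above says that when loading CSV through `http_stream`, the select list must reference source columns positionally as `c1, c2, ...`. A minimal sketch (database, table, and target column names here are illustrative, not from the patch):

```shell
# Illustrative load_sql for http_stream over CSV: source columns are c1, c2, ...
load_sql='insert into testdb.test_tbl (user_id, name) select c1, c2 from http_stream("format"="CSV", "column_separator"=",")'
echo "$load_sql"
# Hypothetical usage:
# curl --location-trusted -u user:passwd -H "sql: $load_sql" \
#     -T data.csv -XPUT http://<fe_ip>:<fe_http_port>/api/_http_stream
```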
@@ -642,9 +644,9 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "merge_type: DELETE" \
-    -H "function_column.sequence_col: age" 
+    -H "function_column.sequence_col: age" \
     -H "column_separator:," \
-    -H "columns: name, gender, age" 
+    -H "columns: name, gender, age" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
@@ -728,26 +730,25 @@ li,male,9
 如下列数据中,列中包含了分隔符 `,`:
 
 ```sql
-张三,30,'上海市,黄浦区,大沽路'
+张三,30,'上海市,黄浦区,大沽路'
 ```
 
-通过制定包围符`'`,可以将“上海市,黄浦区,大沽路”指定为一个字段:
+通过制定包围符`'`,可以将“上海市,黄浦区,大沽路”指定为一个字段:
 
 ```sql
 curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "column_separator:," \
     -H "enclose:'" \
-    -H "escape:\" \
     -H "columns:username,age,address" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
 
-如果包围字符也出现在字段中,如希望将“上海市,黄浦区,'大沽路”作为一个字段,需要先在列中进行字符串转义:
+如果包围字符也出现在字段中,如希望将“上海市,黄浦区,'大沽路”作为一个字段,需要先在列中进行字符串转义:
 
 ```sql
-张三,30,'上海市,黄浦区,\'大沽路'
+张三,30,'上海市,黄浦区,\'大沽路'
 ```
 
 可以通过 escape 参数可以指定单字节转义字符,如下例中 `\`:
@@ -757,6 +758,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "column_separator:," \
     -H "enclose:'" \
+    -H 'escape:\' \
     -H "columns:username,age,address" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
@@ -769,15 +771,19 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 表结构:
 
 ```sql
-`id` bigint(30) NOT NULL,
-`order_code` varchar(30) DEFAULT NULL COMMENT '',
-`create_time` datetimev2(3) DEFAULT CURRENT_TIMESTAMP
+CREATE TABLE testDb.testTbl (
+    `id` BIGINT(30) NOT NULL,
+    `order_code` VARCHAR(30) DEFAULT NULL COMMENT '',
+    `create_time` DATETIMEv2(3) DEFAULT CURRENT_TIMESTAMP
+)
+DUPLICATE KEY(id)
+DISTRIBUTED BY HASH(id) BUCKETS 10;
 ```
 
 JSON 数据格式:
 
 ```Plain
-{"id":1,"order_Code":"avc"}
+{"id":1,"order_code":"avc"}
 ```
 
 导入命令:
@@ -826,7 +832,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "format:json" \
     -H "strip_outer_array:true" \
-    -T streamload_example.csv \
+    -T streamload_example.json \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
 
@@ -858,7 +864,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "strip_outer_array:true" \
     -H "jsonpaths:[\"$.userid\", \"$.username\", \"$.userage\"]" \
     -H "columns:user_id,name,age" \
-    -T streamload_example.csv \
+    -T streamload_example.json \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
 
@@ -1003,7 +1009,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 CREATE TABLE testdb.test_streamload(
     typ_id     BIGINT                NULL   COMMENT "ID",
     hou        VARCHAR(10)           NULL   COMMENT "one",
-    arr        BITMAP  BITMAP_UNION  NULL   COMMENT "two"
+    arr        BITMAP  BITMAP_UNION  NOT NULL   COMMENT "two"
 )
 AGGREGATE KEY(typ_id,hou)
 DISTRIBUTED BY HASH(typ_id,hou) BUCKETS 10;
@@ -1014,7 +1020,7 @@ DISTRIBUTED BY HASH(typ_id,hou) BUCKETS 10;
 ```sql
 curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
-    -H "columns:typ_id,hou,arr,arr=to_bitmap(arr)"
+    -H "columns:typ_id,hou,arr,arr=to_bitmap(arr)" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
@@ -1042,7 +1048,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 CREATE TABLE testdb.test_streamload(
     typ_id           BIGINT          NULL   COMMENT "ID",
     typ_name         VARCHAR(10)     NULL   COMMENT "NAME",
-    pv               hll hll_union   NULL   COMMENT "hll"
+    pv               hll hll_union   NOT NULL   COMMENT "hll"
 )
 AGGREGATE KEY(typ_id,typ_name)
 DISTRIBUTED BY HASH(typ_id) BUCKETS 10;
@@ -1060,7 +1066,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 ### Label、导入事务、多表原子性
 
-Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 
的机制来保证数据导入的不丢不重。具体说明可以参阅 
[导入事务和原子性](../../../data-operate/import/load-atomicity) 文档。
+Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 
的机制来保证数据导入的不丢不重。具体说明可以参阅 [导入事务和原子性](../../../data-operate/transaction) 文档。
 
 ### 列映射、衍生列和过滤
 
@@ -1068,7 +1074,7 @@ Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。
 
 ### 启用严格模式导入
 
-`strict_mode` 
属性用于设置导入任务是否运行在严格模式下。该属性会对列映射、转换和过滤的结果产生影响,它同时也将控制部分列更新的行为。关于严格模式的具体说明,可参阅 
[严格模式](../../../data-operate/import/load-strict-mode) 文档。
+`strict_mode` 
属性用于设置导入任务是否运行在严格模式下。该属性会对列映射、转换和过滤的结果产生影响,它同时也将控制部分列更新的行为。关于严格模式的具体说明,可参阅 
[错误数据处理](../../../data-operate/import/error-data-handling) 文档。
 
 ### 导入时进行部分列更新
 
@@ -1076,4 +1082,4 @@ Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。
 
 ## 更多帮助
 
-关于 Stream Load 使用的更多详细语法及最佳实践,请参阅 [Stream 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/STREAM-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP STREAM LOAD` 获取更多帮助信息。
\ No newline at end of file
+关于 Stream Load 使用的更多详细语法及最佳实践,请参阅 [Stream 
Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/STREAM-LOAD)
 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP STREAM LOAD` 获取更多帮助信息。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/load-data-format.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/load-data-format.md
index 06d2a7b075b..e35a6583e16 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/load-data-format.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/load-data-format.md
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -741,7 +741,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -778,7 +778,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
@@ -791,4 +791,4 @@ WITH S3
     "AWS_SECRET_KEY"="AWS_SECRET_KEY",
     "AWS_REGION" = "AWS_REGION"
 );
-```
\ No newline at end of file
+```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index 9dd1f940090..bdb33f3bada 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -158,7 +158,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker](../../../../advanced/broker.md) 文档。
+  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker Load](../../../../data-operate/import/broker-load-manual) 文档。
 
   ```text
   (
@@ -249,7 +249,7 @@ WITH BROKER broker_name
        SET (
            k2 = tmp_k2 + 1,
            k3 = tmp_k3 + 1
-       )
+       ),
        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
        INTO TABLE `my_table2`
        COLUMNS TERMINATED BY ","
@@ -451,7 +451,7 @@ WITH BROKER broker_name
         FORMAT AS "json"
         PROPERTIES(
           "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
+          "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
         )       
     )
     with HDFS (
@@ -477,7 +477,7 @@ WITH BROKER broker_name
         SET (id = id * 10)
         PROPERTIES(
           "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
+          "jsonpaths" = "[\"$.id\", \"$.code\", \"$.city\"]"
         )       
     )
     with HDFS (
diff --git a/versioned_docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md b/versioned_docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
index 48840ee25fd..c1cf7696e85 100644
--- a/versioned_docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
+++ b/versioned_docs/version-1.2/data-operate/import/import-scenes/external-storage-load.md
@@ -141,7 +141,7 @@ Like [Broker Load](../../../data-operate/import/import-way/broker-load-manual.md
 
 example:
 ```
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git a/versioned_docs/version-1.2/data-operate/import/import-way/s3-load-manual.md b/versioned_docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
index 3cf5dd0052a..301f35f4532 100644
--- a/versioned_docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
+++ b/versioned_docs/version-1.2/data-operate/import/import-way/s3-load-manual.md
@@ -59,7 +59,7 @@ Like Broker Load just replace `WITH BROKER broker_name ()` with
 example:
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
diff --git a/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md b/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
index 4ae5847da93..17fcc968d7c 100644
--- a/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
+++ b/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
@@ -501,7 +501,7 @@ Doris supports importing data directly from object storage systems that support
 ### Import example
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -717,4 +717,4 @@ In general, user environments may not reach speeds of 10M/s, so it is recommende
 
 ## More Help
 
-For more detailed syntax and best practices for using  [Broker Load](../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
\ No newline at end of file
+For more detailed syntax and best practices for using  [Broker Load](../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
diff --git a/versioned_docs/version-2.0/lakehouse/datalake-analytics/hudi.md b/versioned_docs/version-2.0/lakehouse/datalake-analytics/hudi.md
index d73b9fec20b..0d7df2a990b 100644
--- a/versioned_docs/version-2.0/lakehouse/datalake-analytics/hudi.md
+++ b/versioned_docs/version-2.0/lakehouse/datalake-analytics/hudi.md
@@ -78,7 +78,7 @@ Doris uses the parquet native reader to read the data files of the COW table, an
 |      hudiNativeReadSplits=717/810                                            |
 ```
 
-Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action), for exmpale:
+Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action), for example:
 
 ```
 -  HudiJniScanner:  0ns
diff --git a/versioned_docs/version-2.1/data-operate/import/error-data-handling.md b/versioned_docs/version-2.1/data-operate/import/error-data-handling.md
index fc0c7438905..a695936a764 100644
--- a/versioned_docs/version-2.1/data-operate/import/error-data-handling.md
+++ b/versioned_docs/version-2.1/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ By default, strict mode is set to False, which means it is disabled. The method
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ The default value of `max_filter_ratio` is 0, which means that if there is any e
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -279,4 +279,4 @@ The default value of `max_filter_ratio` is 0, which means that if there is any e
    ```
 :::tip
 The `insert_max_filter_ratio` only takes effect when the value of `enable_insert_strict` is `false`, and it is used to control the maximum error rate of `INSERT INTO FROM S3/HDFS/LOCAL()`. The default value is 1.0, which means tolerating all errors.
-:::
\ No newline at end of file
+:::
diff --git a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
index d417103d64c..715dc1847d5 100644
--- a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
+++ b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
@@ -501,7 +501,7 @@ Doris supports importing data directly from object storage systems that support
 ### Import example
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -717,4 +717,4 @@ In general, user environments may not reach speeds of 10M/s, so it is recommende
 
 ## More Help
 
-For more detailed syntax and best practices for using  [Broker Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
\ No newline at end of file
+For more detailed syntax and best practices for using  [Broker Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
diff --git a/versioned_docs/version-2.1/data-operate/import/load-data-format.md b/versioned_docs/version-2.1/data-operate/import/load-data-format.md
index a69d92c57db..80cf760cb16 100644
--- a/versioned_docs/version-2.1/data-operate/import/load-data-format.md
+++ b/versioned_docs/version-2.1/data-operate/import/load-data-format.md
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -742,7 +742,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -779,7 +779,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
diff --git a/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md b/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
index c7e6783bce3..955981b97c9 100644
--- a/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
+++ b/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
@@ -78,7 +78,7 @@ Doris uses the parquet native reader to read the data files of the COW table, an
 |      numNodes=6                                                              |
 |      hudiNativeReadSplits=717/810                                            |
 ```
-Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action.md), for exmpale:
+Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action.md), for example:
 ```
 -  HudiJniScanner:  0ns
   -  FillBlockTime:  31.29ms
diff --git a/versioned_docs/version-3.0/data-operate/import/error-data-handling.md b/versioned_docs/version-3.0/data-operate/import/error-data-handling.md
index fc0c7438905..a695936a764 100644
--- a/versioned_docs/version-3.0/data-operate/import/error-data-handling.md
+++ b/versioned_docs/version-3.0/data-operate/import/error-data-handling.md
@@ -139,7 +139,7 @@ By default, strict mode is set to False, which means it is disabled. The method
 [BROKER LOAD](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
        DATA INFILE("s3://your_bucket_name/your_file.txt")
        INTO TABLE load_test
@@ -227,7 +227,7 @@ The default value of `max_filter_ratio` is 0, which means that if there is any e
 [Broker Load](./import-way/broker-load-manual.md)
 
    ```sql
-   LOAD LABEL example_db.exmpale_label_1
+   LOAD LABEL example_db.example_label_1
    (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -279,4 +279,4 @@ The default value of `max_filter_ratio` is 0, which means that if there is any e
    ```
 :::tip
 The `insert_max_filter_ratio` only takes effect when the value of `enable_insert_strict` is `false`, and it is used to control the maximum error rate of `INSERT INTO FROM S3/HDFS/LOCAL()`. The default value is 1.0, which means tolerating all errors.
-:::
\ No newline at end of file
+:::
diff --git a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
index d417103d64c..715dc1847d5 100644
--- a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
+++ b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
@@ -501,7 +501,7 @@ Doris supports importing data directly from object storage systems that support
 ### Import example
 
 ```sql
-    LOAD LABEL example_db.exmpale_label_1
+    LOAD LABEL example_db.example_label_1
     (
         DATA INFILE("s3://your_bucket_name/your_file.txt")
         INTO TABLE load_test
@@ -717,4 +717,4 @@ In general, user environments may not reach speeds of 10M/s, so it is recommende
 
 ## More Help
 
-For more detailed syntax and best practices for using  [Broker Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
\ No newline at end of file
+For more detailed syntax and best practices for using  [Broker Load](../../../sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD) , please refer to the Broker Load command manual. You can also enter HELP BROKER LOAD in the MySQL client command line to obtain more help information.
diff --git a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
index af01a9224c5..8096a3bb4dd 100644
--- a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
+++ b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
@@ -129,7 +129,7 @@ MySQL> SELECT COUNT(*) FROM testdb.test_table2;
 
 4. You can use [JOB](../../scheduler/job-scheduler.md) make the INSERT operation execute asynchronously.
 
-5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a [catalog](../../../lakehouse/database).
+5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a [catalog](../../../lakehouse/database/jdbc).
 
 ### View INSERT INTO jobs
 
diff --git a/versioned_docs/version-3.0/data-operate/import/import-way/stream-load-manual.md b/versioned_docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
index a5fc94fcb3b..f4c575ce00f 100644
--- a/versioned_docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
+++ b/versioned_docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
@@ -413,16 +413,18 @@ Adding a SQL parameter in the header to replace the previous parameters such as
 Example of load SQL:
 
 ```shell
-insert into db.table (col, ...) select stream_col, ... from http_stream("property1"="value1");
+insert into db.table (col1, col2, ...) select c1, c2, ... from http_stream("property1"="value1");
 ```
 
 http_stream parameter:
 
 - "column_separator" = ","
-
 - "format" = "CSV"
 - ...
 
+When loading CSV data from http_stream, the column name in `select ... from http_stream` must be in format of `c1, c2, ...`.
+See the example below.
+
 For example:
 
 ```Plain
@@ -636,9 +638,9 @@ When a table with a Unique Key has a Sequence column, the value of the Sequence
 curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "merge_type: DELETE" \
-    -H "function_column.sequence_col: age" 
+    -H "function_column.sequence_col: age" \
     -H "column_separator:," \
-    -H "columns: name, gender, age" 
+    -H "columns: name, gender, age" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
@@ -732,7 +734,6 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "column_separator:," \
     -H "enclose:'" \
-    -H "escape:\" \
     -H "columns:username,age,address" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
@@ -751,6 +752,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "column_separator:," \
     -H "enclose:'" \
+    -H 'escape:\' \
     -H "columns:username,age,address" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
@@ -763,15 +765,19 @@ Here's an example of loading data into a table that contains a field with the DE
 Table schema:
 
 ```sql
-`id` bigint(30) NOT NULL,
-`order_code` varchar(30) DEFAULT NULL COMMENT '',
-`create_time` datetimev2(3) DEFAULT CURRENT_TIMESTAMP
+CREATE TABLE testDb.testTbl (
+    `id` BIGINT(30) NOT NULL,
+    `order_code` VARCHAR(30) DEFAULT NULL COMMENT '',
+    `create_time` DATETIMEv2(3) DEFAULT CURRENT_TIMESTAMP
+)
+DUPLICATE KEY(id)
+DISTRIBUTED BY HASH(id) BUCKETS 10;
 ```
 
 JSON data type:
 
 ```Plain
-{"id":1,"order_Code":"avc"}
+{"id":1,"order_code":"avc"}
 ```
 
 Command:
@@ -820,7 +826,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
     -H "format:json" \
     -H "strip_outer_array:true" \
-    -T streamload_example.csv \
+    -T streamload_example.json \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
 
@@ -852,7 +858,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "strip_outer_array:true" \
     -H "jsonpaths:[\"$.userid\", \"$.username\", \"$.userage\"]" \
     -H "columns:user_id,name,age" \
-    -T streamload_example.csv \
+    -T streamload_example.json \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
 
@@ -997,7 +1003,7 @@ Load data into the following table containing the Bitmap type:
 CREATE TABLE testdb.test_streamload(
     typ_id     BIGINT                NULL   COMMENT "ID",
     hou        VARCHAR(10)           NULL   COMMENT "one",
-    arr        BITMAP  BITMAP_UNION  NULL   COMMENT "two"
+    arr        BITMAP  BITMAP_UNION  NOT NULL   COMMENT "two"
 )
 AGGREGATE KEY(typ_id,hou)
 DISTRIBUTED BY HASH(typ_id,hou) BUCKETS 10;
@@ -1008,7 +1014,7 @@ And use to_bitmap to convert the data into the Bitmap type.
 ```sql
 curl --location-trusted -u <doris_user>:<doris_password> \
     -H "Expect:100-continue" \
-    -H "columns:typ_id,hou,arr,arr=to_bitmap(arr)"
+    -H "columns:typ_id,hou,arr,arr=to_bitmap(arr)" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
 ```
@@ -1036,7 +1042,7 @@ Load data into the following table:
 CREATE TABLE testdb.test_streamload(
     typ_id           BIGINT          NULL   COMMENT "ID",
     typ_name         VARCHAR(10)     NULL   COMMENT "NAME",
-    pv               hll hll_union   NULL   COMMENT "hll"
+    pv               hll hll_union   NOT NULL   COMMENT "hll"
 )
 AGGREGATE KEY(typ_id,typ_name)
 DISTRIBUTED BY HASH(typ_id) BUCKETS 10;
@@ -1290,7 +1296,7 @@ curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file -
 
 
 # -- load_sql
-# insert into db.table (col, ...) select stream_col, ... from http_stream("property1"="value1");
+# insert into db.table (col1, col2, ...) select c1, c2, ... from http_stream("property1"="value1");
 
 # http_stream
 # (
diff --git a/versioned_docs/version-3.0/data-operate/import/load-data-format.md b/versioned_docs/version-3.0/data-operate/import/load-data-format.md
index a69d92c57db..2267aa4539d 100644
--- a/versioned_docs/version-3.0/data-operate/import/load-data-format.md
+++ b/versioned_docs/version-3.0/data-operate/import/load-data-format.md
@@ -62,7 +62,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
     -H "line_delimiter:\n" \
     -H "columns_delimiter:|" \
     -H "enclose:'" \
-    -H "escape:\" \
+    -H 'escape:\' \
     -H "skip_lines:2" \
     -T streamload_example.csv \
     -XPUT http://<fe_ip>:<fe_http_port>/api/testdb/test_streamload/_stream_load
@@ -70,7 +70,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.txt")
     INTO TABLE load_test
@@ -742,7 +742,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.parquet")
     INTO TABLE load_test
@@ -779,7 +779,7 @@ curl --location-trusted -u <doris_user>:<doris_password> \
 
 [Broker Load](./import-way/broker-load-manual.md)
 ```sql
-LOAD LABEL example_db.exmpale_label_1
+LOAD LABEL example_db.example_label_1
 (
     DATA INFILE("s3://your_bucket_name/your_file.orc")
     INTO TABLE load_test
diff --git a/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md b/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
index c7e6783bce3..955981b97c9 100644
--- a/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
+++ b/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
@@ -78,7 +78,7 @@ Doris uses the parquet native reader to read the data files of the COW table, an
 |      numNodes=6                                                              |
 |      hudiNativeReadSplits=717/810                                            |
 ```
-Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action.md), for exmpale:
+Users can view the perfomace of Java SDK through [profile](../../admin-manual/fe/profile-action.md), for example:
 ```
 -  HudiJniScanner:  0ns
   -  FillBlockTime:  31.29ms
diff --git a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index 1a89dcf461b..cc6df5c0eb0 100644
--- a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -157,7 +157,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../../advanced/broker.md) documentation for specific information.
+  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker Load](../../../../data-operate/import/broker-load-manual) documentation for specific information.
 
   ```text
   (
@@ -249,7 +249,7 @@ WITH BROKER broker_name
        SET (
            k2 = tmp_k2 + 1,
            k3 = tmp_k3 + 1
-       )
+       ),
        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
        INTO TABLE `my_table2`
        COLUMNS TERMINATED BY ","
@@ -450,7 +450,7 @@ WITH BROKER broker_name
         FORMAT AS "json"
         PROPERTIES(
           "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
+          "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
         )       
     )
     with HDFS (
@@ -476,7 +476,7 @@ WITH BROKER broker_name
         SET (id = id * 10)
         PROPERTIES(
           "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
+          "jsonpaths" = "[\"$.id\", \"$.code\", \"$.city\"]"
         )       
     )
     with HDFS (
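The `jsonpaths` hunks above change bare path tokens such as `$.id` into quoted strings such as `\"$.id\"`. The reason is that `jsonpaths` is expected to be a JSON array of strings, and a bracketed list of unquoted `$.id` tokens is not valid JSON. A quick sketch of the difference, checked with Python's standard `json` module (the two property values are copied from the hunks above):

```python
import json

# Old form from the docs: bare JSONPath tokens inside brackets.
# This is not valid JSON, so parsing it as a JSON array fails.
old = '[$.id, $.city, $.code]'
try:
    json.loads(old)
    old_parses = True
except json.JSONDecodeError:
    old_parses = False

# Corrected form: a JSON array of strings, one JSONPath per element.
new = '["$.id", "$.city", "$.code"]'
paths = json.loads(new)

print(old_parses)  # False
print(paths)       # ['$.id', '$.city', '$.code']
```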


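The `escape` header hunks above replace `-H "escape:\"` with `-H 'escape:\'`. That fix is about POSIX shell quoting rather than about Doris itself: inside double quotes the shell reads `\"` as an escaped quote character, so the double-quoted form never closes its string and swallows the rest of the command line. A minimal illustration using Python's `shlex`, which follows the same POSIX quoting rules (the `curl` arguments here are placeholders):

```python
import shlex

# Double-quoted form: \" is an escaped quote, so the string never
# terminates and the command line cannot be tokenized.
try:
    shlex.split(r'curl -H "escape:\" -T data.csv')
    double_quoted_ok = True
except ValueError:
    double_quoted_ok = False

# Single-quoted form: backslash is literal inside single quotes,
# so the header value arrives as escape:\ as intended.
args = shlex.split(r"curl -H 'escape:\' -T data.csv")

print(double_quoted_ok)  # False
print(args[2])           # escape:\
```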
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
