This is an automated email from the ASF dual-hosted git repository.

wanghailin pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new d296842c9c [Docs] Update concept related docs info (#7184)
d296842c9c is described below

commit d296842c9c27cb2b1ec5194dd68d2e900ece9d49
Author: tcodehuber <tcodehu...@gmail.com>
AuthorDate: Fri Jul 12 18:17:41 2024 +0800

    [Docs] Update concept related docs info (#7184)
---
 docs/en/concept/JobEnvConfig.md          | 14 ++++----
 docs/en/concept/config.md                | 58 ++++++++++++++++----------------
 docs/en/concept/connector-v2-features.md | 14 ++++----
 docs/en/concept/schema-feature.md        | 12 +++----
 docs/en/concept/speed-limit.md           |  2 +-
 docs/en/concept/sql-config.md            |  4 +--
 docs/zh/concept/JobEnvConfig.md          |  6 ++--
 docs/zh/concept/config.md                | 32 ++++++------------
 docs/zh/concept/connector-v2-features.md | 10 +++---
 docs/zh/concept/schema-feature.md        |  2 +-
 docs/zh/concept/speed-limit.md           |  2 +-
 docs/zh/concept/sql-config.md            |  4 +--
 12 files changed, 75 insertions(+), 85 deletions(-)

diff --git a/docs/en/concept/JobEnvConfig.md b/docs/en/concept/JobEnvConfig.md
index e96054bd96..77c924b68f 100644
--- a/docs/en/concept/JobEnvConfig.md
+++ b/docs/en/concept/JobEnvConfig.md
@@ -1,11 +1,11 @@
 # Job Env Config
 
-This document describes env configuration information, the common parameters can be used in all engines. In order to better distinguish between engine parameters, the additional parameters of other engine need to carry a prefix.
+This document describes env configuration information. The common parameters can be used in all engines. In order to better distinguish between engine parameters, the additional parameters of other engines need to carry a prefix.
 In flink engine, we use `flink.` as the prefix. In the spark engine, we do not use any prefixes to modify parameters, because the official spark parameters themselves start with `spark.`
 
 ## Common Parameter
 
-The following configuration parameters are common to all engines
+The following configuration parameters are common to all engines.
 
 ### job.name
 
@@ -13,11 +13,11 @@ This parameter configures the task name.
 
 ### jars
 
-Third-party packages can be loaded via `jars`, like `jars="file://local/jar1.jar;file://local/jar2.jar"`
+Third-party packages can be loaded via `jars`, like `jars="file://local/jar1.jar;file://local/jar2.jar"`.
 
 ### job.mode
 
-You can configure whether the task is in batch mode or stream mode through `job.mode`, like `job.mode = "BATCH"` or `job.mode = "STREAMING"`
+You can configure whether the task is in batch or stream mode through `job.mode`, like `job.mode = "BATCH"` or `job.mode = "STREAMING"`.
 
 ### checkpoint.interval
 
@@ -47,11 +47,11 @@ you can set it to `CLIENT`. Please use `CLUSTER` mode as much as possible, becau
 
 Specify the method of encryption, if you didn't have the requirement for encrypting or decrypting config files, this option can be ignored.
 
-For more details, you can refer to the documentation [config-encryption-decryption](../connector-v2/Config-Encryption-Decryption.md)
+For more details, you can refer to the documentation [Config Encryption Decryption](../connector-v2/Config-Encryption-Decryption.md)
 
 ## Flink Engine Parameter
 
-Here are some SeaTunnel parameter names corresponding to the names in Flink, not all of them, please refer to the official [flink documentation](https://flink.apache.org/) for more.
+Here are some SeaTunnel parameter names corresponding to the names in Flink, not all of them. Please refer to the official [Flink Documentation](https://flink.apache.org/).
 
 |    Flink Configuration Name     |     SeaTunnel Configuration Name      |
 |---------------------------------|---------------------------------------|
@@ -62,4 +62,4 @@ Here are some SeaTunnel parameter names corresponding to the names in Flink, not
 
 ## Spark Engine Parameter
 
-Because spark configuration items have not been modified, they are not listed here, please refer to the official [spark documentation](https://spark.apache.org/).
+Because Spark configuration items have not been modified, they are not listed here; please refer to the official [Spark Documentation](https://spark.apache.org/).
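
For illustration, here is a minimal sketch of an `env` block combining the common and engine-prefixed parameters described above (the `flink.`-prefixed key is a hypothetical example, not taken from the mapping table):

```hocon
env {
  # Common parameters: valid on every engine
  job.name = "my_sync_job"
  job.mode = "STREAMING"
  checkpoint.interval = 10000

  # Flink-specific parameter: carries the `flink.` prefix by convention
  flink.taskmanager.numberOfTaskSlots = 2
}
```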
diff --git a/docs/en/concept/config.md b/docs/en/concept/config.md
index a8c58bae2d..3c206587a7 100644
--- a/docs/en/concept/config.md
+++ b/docs/en/concept/config.md
@@ -5,24 +5,24 @@ sidebar_position: 2
 
 # Intro to config file
 
-In SeaTunnel, the most important thing is the Config file, through which users can customize their own data
+In SeaTunnel, the most important thing is the config file, through which users can customize their own data
 synchronization requirements to maximize the potential of SeaTunnel. So next, I will introduce you how to
-configure the Config file.
+configure the config file.
 
-The main format of the Config file is `hocon`, for more details of this format type you can refer to [HOCON-GUIDE](https://github.com/lightbend/config/blob/main/HOCON.md),
-BTW, we also support the `json` format, but you should know that the name of the config file should end with `.json`
+The main format of the config file is `hocon`; for more details you can refer to [HOCON-GUIDE](https://github.com/lightbend/config/blob/main/HOCON.md).
+We also support the `json` format, but keep in mind that the name of the config file should end with `.json`.
 
-We also support the `SQL` format, for details, please refer to the [SQL configuration](sql-config.md) file.
+We also support the `SQL` format; please refer to [SQL configuration](sql-config.md) for more details.
 
 ## Example
 
 Before you read on, you can find config file
-examples [here](https://github.com/apache/seatunnel/tree/dev/config) and in distribute package's
+examples [here](https://github.com/apache/seatunnel/tree/dev/config) and in the binary package's
 config directory.
 
-## Config file structure
+## Config File Structure
 
-The Config file will be similar to the one below.
+The config file is similar to the example below:
 
 ### hocon
 
@@ -125,12 +125,12 @@ sql = """ select * from "table" """
 
 ```
 
-As you can see, the Config file contains several sections: env, source, transform, sink. Different modules
-have different functions. After you understand these modules, you will understand how SeaTunnel works.
+As you can see, the config file contains several sections: env, source, transform, sink. Different modules
+have different functions. After you understand these modules, you will see how SeaTunnel works.
 
 ### env
 
-Used to add some engine optional parameters, no matter which engine (Spark or Flink), the corresponding
+Used to add some engine optional parameters, no matter which engine (Zeta, Spark or Flink), the corresponding
 optional parameters should be filled in here.
 
 Note that we have separated the parameters by engine, and for the common parameters, we can configure them as before.
@@ -140,9 +140,9 @@ For flink and spark engine, the specific configuration rules of their parameters
 
 ### source
 
-source is used to define where SeaTunnel needs to fetch data, and use the fetched data for the next step.
-Multiple sources can be defined at the same time. The supported source at now
-check [Source of SeaTunnel](../connector-v2/source). Each source has its own specific parameters to define how to
+Source is used to define where SeaTunnel needs to fetch data, and use the fetched data for the next step.
+Multiple sources can be defined at the same time. The supported sources can be found
+in [Source of SeaTunnel](../connector-v2/source). Each source has its own specific parameters to define how to
 fetch data, and SeaTunnel also extracts the parameters that each source will use, such as
 the `result_table_name` parameter, which is used to specify the name of the data generated by the current
 source, which is convenient for follow-up used by other modules.
@@ -180,35 +180,35 @@ sink {
     fields = ["name", "age", "card"]
     username = "default"
     password = ""
-    source_table_name = "fake1"
+    source_table_name = "fake"
   }
 }
 ```
 
-Like source, transform has specific parameters that belong to each module. The supported source at now check.
-The supported transform at now check [Transform V2 of SeaTunnel](../transform-v2)
+Like source, transform has specific parameters that belong to each module. The supported transforms can be found
+in [Transform V2 of SeaTunnel](../transform-v2).
 
 ### sink
 
 Our purpose with SeaTunnel is to synchronize data from one place to another, so it is critical to define how
 and where data is written. With the sink module provided by SeaTunnel, you can complete this operation quickly
-and efficiently. Sink and source are very similar, but the difference is reading and writing. So go check out
-our [supported sinks](../connector-v2/sink).
+and efficiently. Sink and source are very similar, but the difference is reading and writing. So please check out
+[Supported Sinks](../connector-v2/sink).
 
 ### Other
 
 You will find that when multiple sources and multiple sinks are defined, which data is read by each sink, and
-which is the data read by each transform? We use `result_table_name` and `source_table_name` two key
-configurations. Each source module will be configured with a `result_table_name` to indicate the name of the
+which is the data read by each transform? We introduce two key configurations called `result_table_name` and
+`source_table_name`. Each source module will be configured with a `result_table_name` to indicate the name of the
 data source generated by the data source, and other transform and sink modules can use `source_table_name` to
 refer to the corresponding data source name, indicating that I want to read the data for processing. Then
 transform, as an intermediate processing module, can use both `result_table_name` and `source_table_name`
-configurations at the same time. But you will find that in the above example Config, not every module is
+configurations at the same time. But you will find that in the above example config, not every module is
 configured with these two parameters, because in SeaTunnel, there is a default convention, if these two
 parameters are not configured, then the generated data from the last module of the previous node will be used.
 This is much more convenient when there is only one source.
 
-## Config variable substitution
+## Config Variable Substitution
 
 In config file we can define some variables and replace it in run time. **This is only support `hocon` format file**.
 
@@ -266,7 +266,7 @@ We can replace those parameters with this shell command:
 -i nameVal=abc 
 -i username=seatunnel=2.3.1 
 -i password='$a^b%c.d~e0*9(' 
--e local
+-m local
 ```
 
 Then the final submitted config is:
@@ -312,12 +312,12 @@ sink {
 ```
 
 Some Notes:
-- quota with `'` if the value has special character (like `(`)
-- if the replacement variables is in `"` or `'`, like `resName` and `nameVal`, you need add `"`
-- the value can't have space `' '`, like `-i jobName='this is a job name' `, this will be replaced to `job.name = "this"`
-- If you want to use dynamic parameters,you can use the following format: -i date=$(date +"%Y%m%d").
+- Quote with `'` if the value has a special character such as `(`
+- If the replacement variable is in `"` or `'`, like `resName` and `nameVal`, you need to add `"`
+- The value can't have a space `' '`, like `-i jobName='this is a job name' `, this will be replaced to `job.name = "this"`
+- If you want to use dynamic parameters, you can use the following format: `-i date=$(date +"%Y%m%d")`.
 
 ## What's More
 
-If you want to know the details of this format configuration, Please
+If you want to know the details of the format configuration, please
 see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
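
As a concrete illustration of the `result_table_name`/`source_table_name` wiring and the variable substitution described above, here is a minimal sketch (connector choices and table names are hypothetical):

```hocon
env {
  job.mode = "BATCH"
  job.name = ${jobName}  # supplied at submit time, e.g. `-i jobName=my_job`
}

source {
  FakeSource {
    result_table_name = "fake"  # downstream modules refer to this name
  }
}

transform {
  Sql {
    source_table_name = "fake"        # read the table produced by the source
    result_table_name = "fake_clean"  # publish the result under a new name
    query = "select name, age from fake"
  }
}

sink {
  Console {
    source_table_name = "fake_clean"  # write the transformed table
  }
}
```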
diff --git a/docs/en/concept/connector-v2-features.md b/docs/en/concept/connector-v2-features.md
index 7eb3cd4875..ad8433453f 100644
--- a/docs/en/concept/connector-v2-features.md
+++ b/docs/en/concept/connector-v2-features.md
@@ -1,9 +1,9 @@
 # Intro To Connector V2 Features
 
-## Differences Between Connector V2 And Connector v1
+## Differences Between Connector V2 And V1
 
 Since https://github.com/apache/seatunnel/issues/1608 We Added Connector V2 Features.
-Connector V2 is a connector defined based on the SeaTunnel Connector API interface. Unlike Connector V1, Connector V2 supports the following features.
+Connector V2 is a connector defined based on the SeaTunnel Connector API interface. Unlike Connector V1, V2 supports the following features:
 
 * **Multi Engine Support** SeaTunnel Connector API is an engine independent API. The connectors developed based on this API can run in multiple engines. Currently, Flink and Spark are supported, and we will support other engines in the future.
 * **Multi Engine Version Support** Decoupling the connector from the engine through the translation layer solves the problem that most connectors need to modify the code in order to support a new version of the underlying engine.
@@ -18,23 +18,23 @@ Source connectors have some common core features, and each source connector supp
 
 If each piece of data in the data source will only be sent downstream by the source once, we think this source connector supports exactly once.
 
-In SeaTunnel, we can save the read **Split** and its **offset**(The position of the read data in split at that time,
-such as line number, byte size, offset, etc) as **StateSnapshot** when checkpoint. If the task restarted, we will get the last **StateSnapshot**
+In SeaTunnel, we can save the read **Split** and its **offset** (The position of the read data in split at that time,
+such as line number, byte size, offset, etc.) as **StateSnapshot** when checkpointing. If the task restarted, we will get the last **StateSnapshot**
 and then locate the **Split** and **offset** read last time and continue to send data downstream.
 
 For example `File`, `Kafka`.
 
 ### column projection
 
-If the connector supports reading only specified columns from the data source (note that if you read all columns first and then filter unnecessary columns through the schema, this method is not a real column projection)
+If the connector supports reading only specified columns from the data source (Note that if you read all columns first and then filter unnecessary columns through the schema, this method is not a real column projection)
 
-For example `JDBCSource` can use sql define read columns.
+For example, `JDBCSource` can use SQL to define the columns to read.
 
 `KafkaSource` will read all content from topic and then use `schema` to filter unnecessary columns, This is not `column projection`.
 
 ### batch
 
-Batch Job Mode, The data read is bounded and the job will stop when all data read complete.
+Batch Job Mode. The data read is bounded and the job will stop after all data has been read.
 
 ### stream
 
diff --git a/docs/en/concept/schema-feature.md b/docs/en/concept/schema-feature.md
index 9ae2c3d39e..a448104fcf 100644
--- a/docs/en/concept/schema-feature.md
+++ b/docs/en/concept/schema-feature.md
@@ -1,13 +1,13 @@
 # Intro to schema feature
 
-## Why we need schema
+## Why We Need Schema
 
 Some NoSQL databases or message queue are not strongly limited schema, so the schema cannot be obtained through the api.
 At this time, a schema needs to be defined to convert to TableSchema and obtain data.
 
 ## SchemaOptions
 
-We can use SchemaOptions to define schema, the SchemaOptions contains some config to define the schema. e.g. columns, primaryKey, constraintKeys.
+We can use SchemaOptions to define schema; the SchemaOptions contains some configs to define the schema, e.g. columns, primaryKey, constraintKeys.
 
 ```
 schema = {
@@ -43,7 +43,7 @@ The comment of the CatalogTable which the schema belongs to.
 ### Columns
 
-Columns is a list of config used to define the column in schema, each column can contains name, type, nullable, defaultValue, comment field.
+Columns is a list of configs used to define the columns in schema; each column can contain name, type, nullable, defaultValue, comment fields.
 
 ```
 columns = [
@@ -80,13 +80,13 @@ columns = [
 | bigint    | `java.lang.Long`                                   | All numbers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807 are allowed. |
 | float     | `java.lang.Float`                                  | Float-precision numeric data from -1.79E+308 to 1.79E+308. |
 | double    | `java.lang.Double`                                 | Double precision floating point. Handle most decimals. |
-| decimal   | `java.math.BigDecimal`                             | DOUBLE type stored as a string, allowing a fixed decimal point. |
+| decimal   | `java.math.BigDecimal`                             | Double type stored as a string, allowing a fixed decimal point. |
 | null      | `java.lang.Void`                                   | null |
-| bytes     | `byte[]`                                           | bytes. |
+| bytes     | `byte[]`                                           | bytes |
 | date      | `java.time.LocalDate`                              | Only the date is stored. From January 1, 0001 to December 31, 9999. |
 | time      | `java.time.LocalTime`                              | Only store time. Accuracy is 100 nanoseconds. |
 | timestamp | `java.time.LocalDateTime`                          | Stores a unique number that is updated whenever a row is created or modified. timestamp is based on the internal clock and does not correspond to real time. There can only be one timestamp variable per table. |
-| row       | `org.apache.seatunnel.api.table.type.SeaTunnelRow` | Row type,can be nested. |
+| row       | `org.apache.seatunnel.api.table.type.SeaTunnelRow` | Row type, can be nested. |
 | map       | `java.util.Map`                                    | A Map is an object that maps keys to values. The key type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double` `decimal` `date` `time` `timestamp` `null`, and the value type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double` `decimal` `date` `time` `timestamp` `null` `array` `map` `row`. |
 | array     | `ValueType[]`                                      | A array is a data type that represents a collection of elements. The element type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double`. |
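
To make the SchemaOptions above concrete, here is a minimal sketch of a schema definition (the column set and key name are hypothetical; the fields follow the sections described above):

```hocon
schema = {
  columns = [
    {
      name = id
      type = bigint
      nullable = false
      comment = "primary key id"
    },
    {
      name = name
      type = string
      nullable = true
      defaultValue = "unknown"
    }
  ]
  primaryKey {
    name = "pk_id"
    columnNames = [id]
  }
}
```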
 
diff --git a/docs/en/concept/speed-limit.md b/docs/en/concept/speed-limit.md
index 4b7e7c03ca..87379e5b75 100644
--- a/docs/en/concept/speed-limit.md
+++ b/docs/en/concept/speed-limit.md
@@ -39,6 +39,6 @@ sink {
 }
 ```
 
-We have placed `read_limit.bytes_per_second` and `read_limit.rows_per_second` in the `env` parameters, completing the speed control configuration.
+We have placed `read_limit.bytes_per_second` and `read_limit.rows_per_second` in the `env` parameters to finish the speed control configuration.
 You can configure both of these parameters simultaneously or choose to configure only one of them. The value of each `value` represents the maximum rate at which each thread is restricted.
 Therefore, when configuring the respective values, please take into account the parallelism of your tasks.
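
As a sketch of the speed-limit configuration described above (the numbers are illustrative, not recommendations):

```hocon
env {
  job.mode = "STREAMING"
  # Each value caps a single thread, so the effective job-wide limit
  # is roughly value * parallelism.
  read_limit.bytes_per_second = 7000000
  read_limit.rows_per_second = 400
  parallelism = 2
}
```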
diff --git a/docs/en/concept/sql-config.md b/docs/en/concept/sql-config.md
index c397ee03b7..fe148a6f72 100644
--- a/docs/en/concept/sql-config.md
+++ b/docs/en/concept/sql-config.md
@@ -2,7 +2,7 @@
 
 ## Structure of SQL Configuration File
 
-The `SQL` configuration file appears as follows.
+The `SQL` configuration file appears as follows:
 
 ### SQL
 
@@ -173,7 +173,7 @@ CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
 ```
 
 * This syntax creates a temporary table with the result of a `SELECT` query, used for `INSERT INTO` operations.
-* The syntax of the `SELECT` part refers to: [SQL-transform](../transform-v2/sql.md) `query` configuration item
+* The syntax of the `SELECT` part refers to: [SQL Transform](../transform-v2/sql.md) `query` configuration item
 
 ```sql
 CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
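-- For instance, the temporary table created above can feed a subsequent insert
-- (a sketch; `sink_table` is a hypothetical table defined elsewhere in the same SQL config):
INSERT INTO sink_table SELECT * FROM temp1;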
diff --git a/docs/zh/concept/JobEnvConfig.md b/docs/zh/concept/JobEnvConfig.md
index d70c82b216..c20797604f 100644
--- a/docs/zh/concept/JobEnvConfig.md
+++ b/docs/zh/concept/JobEnvConfig.md
@@ -48,11 +48,11 @@
 
 指定加密方式,如果您没有加密或解密配置文件的需求,此选项可以忽略。
 
-更多详细信息,您可以参考文档 [config-encryption-decryption](../../en/connector-v2/Config-Encryption-Decryption.md)
+更多详细信息,您可以参考文档 [Config Encryption Decryption](../../en/connector-v2/Config-Encryption-Decryption.md)
 
 ## Flink 引擎参数
 
-这里列出了一些与 Flink 中名称相对应的 SeaTunnel 参数名称,并非全部,更多内容请参考官方 [flink documentation](https://flink.apache.org/) for more.
+这里列出了一些与 Flink 中名称相对应的 SeaTunnel 参数名称,并非全部,更多内容请参考官方 [Flink Documentation](https://flink.apache.org/)。
 
 |           Flink 配置名称            |            SeaTunnel 配置名称             |
 |---------------------------------|---------------------------------------|
@@ -63,5 +63,5 @@
 
 ## Spark 引擎参数
 
-由于spark配置项并无调整,这里就不列出来了,请参考官方 [spark documentation](https://spark.apache.org/).
+由于Spark配置项并无调整,这里就不列出来了,请参考官方 [Spark Documentation](https://spark.apache.org/).
 
diff --git a/docs/zh/concept/config.md b/docs/zh/concept/config.md
index 8f4368a67f..72c14bafce 100644
--- a/docs/zh/concept/config.md
+++ b/docs/zh/concept/config.md
@@ -5,21 +5,11 @@ sidebar_position: 2
 
 # 配置文件简介
 
-In SeaTunnel, the most important thing is the Config file, through which users can customize their own data
-synchronization requirements to maximize the potential of SeaTunnel. So next, I will introduce you how to
-configure the Config file.
-
-在SeaTunnel中,最重要的事情就是配置文件,尽管用户可以自定义他们自己的数据同步需求以发挥SeaTunnel最大的潜力。那么接下来,
-我将会向你介绍如何设置配置文件。
-
-The main format of the Config file is `hocon`, for more details of this format type you can refer to [HOCON-GUIDE](https://github.com/lightbend/config/blob/main/HOCON.md),
-BTW, we also support the `json` format, but you should know that the name of the config file should end with `.json`
+在SeaTunnel中,最重要的事情就是配置文件,尽管用户可以自定义他们自己的数据同步需求以发挥SeaTunnel最大的潜力。那么接下来我将会向你介绍如何设置配置文件。
 
 配置文件的主要格式是 `hocon`, 有关该格式类型的更多信息你可以参考[HOCON-GUIDE](https://github.com/lightbend/config/blob/main/HOCON.md),
 顺便提一下,我们也支持 `json`格式,但你应该知道配置文件的名称应该是以 `.json`结尾。
 
-We also support the `SQL` format, for details, please refer to the [SQL configuration](sql-config.md) file.
-
 我们同时提供了以 `SQL` 格式,详细可以参考[SQL配置文件](sql-config.md)。
 我们同时提供了以 `SQL` 格式,详细可以参考[SQL配置文件](sql-config.md)。
 
 ## 例子
@@ -28,7 +18,7 @@ We also support the `SQL` format, for details, please refer to the [SQL configur
 
 ## 配置文件结构
 
-配置文件类似下面。
+配置文件类似下面这个例子:
 
 ### hocon
 
@@ -131,14 +121,14 @@ sql = """ select * from "table" """
 
 ```
 
-正如你看到的,配置文件包括几个部分:env, source, transform, sink。不同的模块有不同的功能。
-当你了解了这些模块后,你就会懂得SeaTunnel如何工作。
+正如你看到的,配置文件包括几个部分:env, source, transform, sink。不同的模块具有不同的功能。
+当你了解了这些模块后,你就会懂得SeaTunnel到底是如何工作的。
 
 ### env
 
-用于添加引擎可选的参数,不管是什么引擎(Spark 或者 Flink),对应的可选参数应该在这里填写。
+用于添加引擎可选的参数,不管是什么引擎(Zeta、Spark 或者 Flink),对应的可选参数应该在这里填写。
 
-注意,我们按照引擎分离了参数,对于公共参数,我们可以像以前一样配置。对于Flink和Spark引擎,其参数的具体配置规则可以参考[JobEnvConfig](./JobEnvConfig.md)。
+注意,我们按照引擎分离了参数,对于公共参数我们可以像以前一样配置。对于Flink和Spark引擎,其参数的具体配置规则可以参考[JobEnvConfig](./JobEnvConfig.md)。
 
 <!-- TODO add supported env parameters -->
 
@@ -152,7 +142,7 @@ source用于定义SeaTunnel在哪儿检索数据,并将检索的数据用于
 ### transform
 
 当我们有了数据源之后,我们可能需要对数据进行进一步的处理,所以我们就有了transform模块。当然,这里使用了“可能”这个词,
-这意味着我们也可以直接将transform视为不存在,直接从source到sink。像下面这样。
+这意味着我们也可以直接将transform视为不存在,直接从source到sink,像下面这样:
 
 ```hocon
 env {
@@ -193,19 +183,19 @@ sink {
 ### sink
 
 我们使用SeaTunnel的作用是将数据从一个地方同步到其它地方,所以定义数据如何写入,写入到哪里是至关重要的。通过SeaTunnel提供的
-sink模块,你可以快速高效地完成这个操作。Sink和source非常相似,区别在于读取和写入。所以去看看我们[支持的sink](../../en/connector-v2/sink)吧。
+sink模块,你可以快速高效地完成这个操作。Sink和source非常相似,区别在于读取和写入。所以去看看我们[Sink of SeaTunnel](../../en/connector-v2/sink)吧。
 
 ### 其它
 
 
 你会疑惑当定义了多个source和多个sink时,每个sink读取哪些数据,每个transform读取哪些数据?我们使用`result_table_name` 和
-`source_table_name` 两个键配置。每个source模块都会配置一个`result_table_name`来指示数据源生成的数据源名称,其它transform和sink
+`source_table_name` 两个配置。每个source模块都会配置一个`result_table_name`来指示数据源生成的数据源名称,其它transform和sink
 模块可以使用`source_table_name` 引用相应的数据源名称,表示要读取数据进行处理。然后transform,作为一个中间的处理模块,可以同时使用
 `result_table_name` 和 `source_table_name` 配置。但你会发现在上面的配置例子中,不是每个模块都配置了这些参数,因为在SeaTunnel中,
 有一个默认的约定,如果这两个参数没有配置,则使用上一个节点的最后一个模块生成的数据。当只有一个source时这是非常方便的。
 
 ## 配置变量替换
 
-在配置文件中,我们可以定义一些变量并在运行时替换它们。这仅支持 hocon 格式的文件。
+在配置文件中,我们可以定义一些变量并在运行时替换它们。但是注意仅支持 hocon 格式的文件。
 
 ```hocon
 env {
@@ -309,7 +299,7 @@ sink {
 
 一些注意事项:
 
-- 如果值包含特殊字符(如`(`),请使用`'`引号将其括起来。
+- 如果值包含特殊字符,如`(`,请使用`'`引号将其括起来。
 - 如果替换变量包含`"`或`'`(如`"resName"`和`"nameVal"`),需要添加`"`。
 - 值不能包含空格`' '`。例如, `-i jobName='this is a job name'`将被替换为`job.name = "this"`。
 - 如果要使用动态参数,可以使用以下格式: `-i date=$(date +"%Y%m%d")`。
diff --git a/docs/zh/concept/connector-v2-features.md b/docs/zh/concept/connector-v2-features.md
index 9708eb373d..77041e9532 100644
--- a/docs/zh/concept/connector-v2-features.md
+++ b/docs/zh/concept/connector-v2-features.md
@@ -1,9 +1,9 @@
 # Connector V2 功能简介
 
-## Connector V2 和 Connector V1 之间的不同
+## Connector V2 和 V1 之间的不同
 
 从 https://github.com/apache/seatunnel/issues/1608 我们添加了 Connector V2 特性。
-Connector V2 是基于SeaTunnel Connector API接口定义的连接器。不像Connector V1,Connector V2 支持如下特性:
+Connector V2 是基于SeaTunnel Connector API接口定义的连接器。不像Connector V1, V2 支持如下特性:
 
 * **多引擎支持** SeaTunnel Connector API 是引擎独立的API。基于这个API开发的连接器可以在多个引擎上运行。目前支持Flink和Spark引擎,后续我们会支持其它的引擎。
 * **多引擎版本支持** 通过翻译层将连接器与引擎解耦,解决了大多数连接器需要修改代码才能支持新版本底层引擎的问题。
@@ -18,7 +18,7 @@ Source connector有一些公共的核心特性,每个source connector在不同
 
 如果数据源中的每条数据仅由源向下游发送一次,我们认为该source connector支持精确一次(exactly-once)。
 
-在SeaTunnel中, 我们可以保存读取的 **Split** 和 它的 **offset**(当时读取的数据被分割时的位置,例如行号, 字节大小, 偏移量等) 作为检查点时的 **StateSnapshot** 。 如果任务重新启动, 我们会得到最后的 **StateSnapshot**
+在SeaTunnel中, 我们可以保存读取的 **Split** 和它的 **offset**(当时读取的数据被分割时的位置,例如行号, 字节大小, 偏移量等) 作为检查点时的 **StateSnapshot** 。 如果任务重新启动, 我们会得到最后的 **StateSnapshot**
 然后定位到上次读取的 **Split** 和 **offset**,继续向下游发送数据。
 
 例如 `File`, `Kafka`。
@@ -50,7 +50,7 @@ Source connector有一些公共的核心特性,每个source connector在不同
 
 ### 支持多表读取
 
-支持在一个 SeaTunnel 作业中读取多个表
+支持在一个 SeaTunnel 作业中读取多个表。
 
 ## Sink Connector 的特性
 
@@ -63,7 +63,7 @@ Sink connector有一些公共的核心特性,每个sink connector在不同程
 对于sink connector,如果任何数据只写入目标一次,则sink connector支持精确一次。 通常有两种方法可以实现这一目标:
 
 * 目标数据库支持key去重。例如 `MySQL`, `Kudu`。
-* 目标支持 **XA 事务**(事务可以跨会话使用。即使创建事务的程序已经结束,新启动的程序也只需要知道最后一个事务的ID就可以重新提交或回滚事务)。 然后我们可以使用 **两阶段提交** 来确保 * 精确一次**。 例如:`File`, `MySQL`.
+* 目标支持 **XA 事务**(事务可以跨会话使用,即使创建事务的程序已经结束,新启动的程序也只需要知道最后一个事务的ID就可以重新提交或回滚事务)。 然后我们可以使用 **两阶段提交** 来确保 **精确一次**。 例如:`File`, `MySQL`.
 
 ### cdc(更改数据捕获,change data capture)
 
diff --git a/docs/zh/concept/schema-feature.md b/docs/zh/concept/schema-feature.md
index adb4089298..d719a7953e 100644
--- a/docs/zh/concept/schema-feature.md
+++ b/docs/zh/concept/schema-feature.md
@@ -80,7 +80,7 @@ columns = [
 | bigint    | `java.lang.Long`       | 允许 -9,223,372,036,854,775,808 和 9,223,372,036,854,775,807 之间的所有数字。 |
 | float     | `java.lang.Float`      | 从-1.79E+308 到 1.79E+308浮点精度数值数据。 |
 | double    | `java.lang.Double`     | 双精度浮点。 处理大多数小数。 |
-| decimal   | `java.math.BigDecimal` | DOUBLE 类型存储为字符串,允许固定小数点。 |
+| decimal   | `java.math.BigDecimal` | Double 类型存储为字符串,允许固定小数点。 |
 | null      | `java.lang.Void`       | null |
 | bytes     | `byte[]`               | 字节。 |
 | date      | `java.time.LocalDate`  | 仅存储日期。从0001年1月1日到9999 年 12 月 31 日。 |
diff --git a/docs/zh/concept/speed-limit.md b/docs/zh/concept/speed-limit.md
index cab8fc8bff..51007269dd 100644
--- a/docs/zh/concept/speed-limit.md
+++ b/docs/zh/concept/speed-limit.md
@@ -40,4 +40,4 @@ sink {
 
 我们在`env`参数中放了`read_limit.bytes_per_second` 和 `read_limit.rows_per_second`来完成速度控制的配置。
 你可以同时配置这两个参数,或者只配置其中一个。每个`value`的值代表每个线程被限制的最大速率。
-因此,在配置各个值时,请考虑你任务的并行性。
+因此,在配置各个值时,还需要同时考虑你任务的并行性。
diff --git a/docs/zh/concept/sql-config.md b/docs/zh/concept/sql-config.md
index f20d1f5e2a..7defa0010b 100644
--- a/docs/zh/concept/sql-config.md
+++ b/docs/zh/concept/sql-config.md
@@ -2,7 +2,7 @@
 
 ## SQL配置文件结构
 
-`SQL`配置文件类似下面。
+`SQL`配置文件类似下面这样:
 
 ### SQL
 
@@ -173,7 +173,7 @@ CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
 ```
 
 * 该语法可以将一个`SELECT`查询结果作为一个临时表,用于的`INSERT INTO`操作
-* `SELECT` 部分的语法参考:[SQL-transform](../transform-v2/sql.md) `query` 配置项
+* `SELECT` 部分的语法参考:[SQL Transform](../transform-v2/sql.md) `query` 配置项
 
 ```sql
 CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;

