This is an automated email from the ASF dual-hosted git repository.
liugddx pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git
The following commit(s) were added to refs/heads/dev by this push:
new 3aa4ae5e7 [Doc] Improve DB2 Source Vertica Source & DB2 Sink Vertica Sink document (#5102)
3aa4ae5e7 is described below
commit 3aa4ae5e7c5ddfcb54afe80743f71953678840a0
Author: zhangchengming601 <[email protected]>
AuthorDate: Tue Jul 18 21:34:21 2023 +0800
[Doc] Improve DB2 Source Vertica Source & DB2 Sink Vertica Sink document (#5102)
---
docs/en/connector-v2/sink/DB2.md | 171 ++++++++++++++++++++++++++++++++
docs/en/connector-v2/sink/Vertica.md | 173 +++++++++++++++++++++++++++++++++
docs/en/connector-v2/source/DB2.md | 155 +++++++++++++++++++++++++++++
docs/en/connector-v2/source/Vertica.md | 157 ++++++++++++++++++++++++++++++
4 files changed, 656 insertions(+)
diff --git a/docs/en/connector-v2/sink/DB2.md b/docs/en/connector-v2/sink/DB2.md
new file mode 100644
index 000000000..8f5a7285e
--- /dev/null
+++ b/docs/en/connector-v2/sink/DB2.md
@@ -0,0 +1,171 @@
+# DB2
+
+> JDBC DB2 Sink Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [cdc](../../concept/connector-v2-features.md)
+
+> Use `Xa transactions` to ensure `exactly-once`. Therefore, `exactly-once` is only supported for databases that
+> support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
+
+## Description
+
+Write data through JDBC. Supports Batch mode and Streaming mode, concurrent writing, and exactly-once
+semantics (using XA transaction guarantee).
+
+## Supported DataSource Info
+
+| Datasource | Supported Versions                                       | Driver                         | Url                               | Maven                                                                 |
+|------------|----------------------------------------------------------|--------------------------------|-----------------------------------|-----------------------------------------------------------------------|
+| DB2        | Different dependency version has different driver class. | com.ibm.db2.jdbc.app.DB2Driver | jdbc:db2://127.0.0.1:50000/dbname | [Download](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) |
+
+## Database Dependency
+
+> Please download the driver jar from the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example DB2 datasource: cp db2-connector-java-xxx.jar $SEATUNNEL_HOME/plugins/jdbc/lib/
+
+## Data Type Mapping
+
+| DB2 Data type                                                                                        | SeaTunnel Data type |
+|------------------------------------------------------------------------------------------------------|---------------------|
+| BOOLEAN                                                                                              | BOOLEAN             |
+| SMALLINT                                                                                             | SHORT               |
+| INT<br/>INTEGER                                                                                      | INTEGER             |
+| BIGINT                                                                                               | LONG                |
+| DECIMAL<br/>DEC<br/>NUMERIC<br/>NUM                                                                  | DECIMAL(38,18)      |
+| REAL                                                                                                 | FLOAT               |
+| FLOAT<br/>DOUBLE<br/>DOUBLE PRECISION<br/>DECFLOAT                                                   | DOUBLE              |
+| CHAR<br/>VARCHAR<br/>LONG VARCHAR<br/>CLOB<br/>GRAPHIC<br/>VARGRAPHIC<br/>LONG VARGRAPHIC<br/>DBCLOB | STRING              |
+| BLOB                                                                                                 | BYTES               |
+| DATE                                                                                                 | DATE                |
+| TIME                                                                                                 | TIME                |
+| TIMESTAMP                                                                                            | TIMESTAMP           |
+| ROWID<br/>XML                                                                                        | Not supported yet   |
+
+## Sink Options
+
+| Name                                      | Type    | Required | Default | Description                                                                                                                                                                                                                                                    |
+|-------------------------------------------|---------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url                                       | String  | Yes      | -       | The URL of the JDBC connection. Refer to a case: jdbc:db2://127.0.0.1:50000/dbname                                                                                                                                                                             |
+| driver                                    | String  | Yes      | -       | The jdbc class name used to connect to the remote data source,<br/> if you use DB2 the value is `com.ibm.db2.jdbc.app.DB2Driver`.                                                                                                                              |
+| user                                      | String  | No       | -       | Connection instance user name                                                                                                                                                                                                                                  |
+| password                                  | String  | No       | -       | Connection instance password                                                                                                                                                                                                                                   |
+| query                                     | String  | No       | -       | Use this SQL to write upstream input data to the database, e.g. `INSERT ...`. `query` has a higher priority.                                                                                                                                                   |
+| database                                  | String  | No       | -       | Use this `database` and `table-name` to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority.                                                                          |
+| table                                     | String  | No       | -       | Use database and this table-name to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority.                                                                             |
+| primary_keys                              | Array   | No       | -       | This option is used to support operations such as `insert`, `delete`, and `update` when SQL is automatically generated.                                                                                                                                        |
+| support_upsert_by_query_primary_key_exist | Boolean | No       | false   | Choose to use INSERT and UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether the query primary key exists. This configuration is only used when the database does not support upsert syntax. **Note**: this method has low performance. |
+| connection_check_timeout_sec              | Int     | No       | 30      | The time in seconds to wait for the database operation used to validate the connection to complete.                                                                                                                                                            |
+| max_retries                               | Int     | No       | 0       | The number of retries for a failed submit (executeBatch).                                                                                                                                                                                                      |
+| batch_size                                | Int     | No       | 1000    | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`,<br/> the data will be flushed into the database.                                                                                           |
+| batch_interval_ms                         | Int     | No       | 1000    | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into the database.                                                                                               |
+| is_exactly_once                           | Boolean | No       | false   | Whether to enable exactly-once semantics, which will use Xa transactions. If enabled, you need to<br/>set `xa_data_source_class_name`.                                                                                                                         |
+| generate_sink_sql                         | Boolean | No       | false   | Generate SQL statements based on the database table you want to write to.                                                                                                                                                                                      |
+| xa_data_source_class_name                 | String  | No       | -       | The XA data source class name of the database driver, for example, for DB2 it is `com.db2.cj.jdbc.Db2XADataSource`; please refer to the appendix for other data sources.                                                                                        |
+| max_commit_attempts                       | Int     | No       | 3       | The number of retries for transaction commit failures.                                                                                                                                                                                                         |
+| transaction_timeout_sec                   | Int     | No       | -1      | The timeout after the transaction is opened; the default is -1 (never timeout). Note that setting the timeout may affect<br/>exactly-once semantics.                                                                                                           |
+| auto_commit                               | Boolean | No       | true    | Automatic transaction commit is enabled by default.                                                                                                                                                                                                            |
+| common-options                            |         | No       | -       | Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.                                                                                                                                                           |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
+## Task Example
+
+### Simple:
+
+> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table test_table will also contain 16 rows of data. Before running this job, you need to create database test and table test_table in your DB2. And if you have not yet installed and deployed SeaTunnel, [...]
+
+```
+# Defining the runtime environment
+env {
+ # You can set flink configuration here
+ execution.parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+  # This is an example source plugin **only for test and demonstrate the feature source plugin**
+ FakeSource {
+ parallelism = 1
+ result_table_name = "fake"
+ row.num = 16
+ schema = {
+ fields {
+ name = "string"
+ age = "int"
+ }
+ }
+ }
+  # If you would like to get more information about how to configure seatunnel and see full list of source plugins,
+  # please go to https://seatunnel.apache.org/docs/category/source-v2
+}
+
+transform {
+  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+  # please go to https://seatunnel.apache.org/docs/category/transform-v2
+}
+
+sink {
+ jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+ user = "root"
+ password = "123456"
+ query = "insert into test_table(name,age) values(?,?)"
+ }
+  # If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
+  # please go to https://seatunnel.apache.org/docs/category/sink-v2
+}
+```
+
+### Generate Sink SQL
+
+> This example does not require writing complex SQL statements; you can configure the database name and table name, and the insert statements are automatically generated for you
+
+```
+sink {
+ jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+ user = "root"
+ password = "123456"
+ # Automatically generate sql statements based on database table names
+ generate_sink_sql = true
+ database = test
+ table = test_table
+ }
+}
+```
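+
+### Upsert By Primary Keys
+
+> When `generate_sink_sql` is enabled, the `primary_keys` option described above lets the connector build `insert`/`update`/`delete` statements. This sketch is illustrative: the `id` key column is an assumption, not part of this change.
+
+```
+sink {
+  jdbc {
+    url = "jdbc:db2://127.0.0.1:50000/dbname"
+    driver = "com.ibm.db2.jdbc.app.DB2Driver"
+    user = "root"
+    password = "123456"
+    generate_sink_sql = true
+    database = test
+    table = test_table
+    # Illustrative key column used to generate upsert statements
+    primary_keys = ["id"]
+    # Query for the key first when the database lacks native upsert syntax (slower)
+    support_upsert_by_query_primary_key_exist = true
+  }
+}
+```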
+
+### Exactly-once:
+
+> For scenarios that require accurate writes, we guarantee exactly-once delivery
+
+```
+sink {
+ jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+
+ max_retries = 0
+ user = "root"
+ password = "123456"
+ query = "insert into test_table(name,age) values(?,?)"
+
+ is_exactly_once = "true"
+
+ xa_data_source_class_name = "com.db2.cj.jdbc.Db2XADataSource"
+ }
+}
+```
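+
+### Batch Tuning
+
+> The `batch_size` and `batch_interval_ms` options described above control when buffered rows are flushed; the values below are illustrative, not recommendations.
+
+```
+sink {
+  jdbc {
+    url = "jdbc:db2://127.0.0.1:50000/dbname"
+    driver = "com.ibm.db2.jdbc.app.DB2Driver"
+    user = "root"
+    password = "123456"
+    query = "insert into test_table(name,age) values(?,?)"
+    # Flush after 500 buffered rows or every 2 seconds, whichever comes first
+    batch_size = 500
+    batch_interval_ms = 2000
+  }
+}
+```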
+
diff --git a/docs/en/connector-v2/sink/Vertica.md
b/docs/en/connector-v2/sink/Vertica.md
new file mode 100644
index 000000000..0db8571d5
--- /dev/null
+++ b/docs/en/connector-v2/sink/Vertica.md
@@ -0,0 +1,173 @@
+# Vertica
+
+> JDBC Vertica Sink Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [cdc](../../concept/connector-v2-features.md)
+
+> Use `Xa transactions` to ensure `exactly-once`. Therefore, `exactly-once` is only supported for databases that
+> support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
+
+## Description
+
+Write data through JDBC. Supports Batch mode and Streaming mode, concurrent writing, and exactly-once
+semantics (using XA transaction guarantee).
+
+## Supported DataSource Info
+
+| Datasource | Supported Versions                                       | Driver                  | Url                                   | Maven                                                                |
+|------------|----------------------------------------------------------|-------------------------|---------------------------------------|----------------------------------------------------------------------|
+| Vertica    | Different dependency version has different driver class. | com.vertica.jdbc.Driver | jdbc:vertica://localhost:5433/vertica | [Download](https://www.vertica.com/download/vertica/client-drivers/) |
+
+## Database Dependency
+
+> Please download the driver jar from the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example Vertica datasource: cp vertica-jdbc-xxx.jar $SEATUNNEL_HOME/plugins/jdbc/lib/
+
+## Data Type Mapping
+
+| Vertica Data type                                                                                                                 | SeaTunnel Data type                                                                                                                               |
+|-----------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| BIT(1)<br/>INT UNSIGNED                                                                                                           | BOOLEAN                                                                                                                                           |
+| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT                                                                                                                                               |
+| INT UNSIGNED<br/>INTEGER UNSIGNED<br/>BIGINT                                                                                      | BIGINT                                                                                                                                            |
+| BIGINT UNSIGNED                                                                                                                   | DECIMAL(20,0)                                                                                                                                     |
+| DECIMAL(x,y)(Get the designated column's specified column size.<38)                                                               | DECIMAL(x,y)                                                                                                                                      |
+| DECIMAL(x,y)(Get the designated column's specified column size.>38)                                                               | DECIMAL(38,18)                                                                                                                                    |
+| DECIMAL UNSIGNED                                                                                                                  | DECIMAL((Get the designated column's specified column size)+1,<br/>(Gets the designated column's number of digits to right of the decimal point.)) |
+| FLOAT<br/>FLOAT UNSIGNED                                                                                                          | FLOAT                                                                                                                                             |
+| DOUBLE<br/>DOUBLE UNSIGNED                                                                                                        | DOUBLE                                                                                                                                            |
+| CHAR<br/>VARCHAR<br/>TINYTEXT<br/>MEDIUMTEXT<br/>TEXT<br/>LONGTEXT<br/>JSON                                                       | STRING                                                                                                                                            |
+| DATE                                                                                                                              | DATE                                                                                                                                              |
+| TIME                                                                                                                              | TIME                                                                                                                                              |
+| DATETIME<br/>TIMESTAMP                                                                                                            | TIMESTAMP                                                                                                                                         |
+| TINYBLOB<br/>MEDIUMBLOB<br/>BLOB<br/>LONGBLOB<br/>BINARY<br/>VARBINAR<br/>BIT(n)                                                  | BYTES                                                                                                                                             |
+| GEOMETRY<br/>UNKNOWN                                                                                                              | Not supported yet                                                                                                                                 |
+
+## Sink Options
+
+| Name                                      | Type    | Required | Default | Description                                                                                                                                                                                                                                                    |
+|-------------------------------------------|---------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url                                       | String  | Yes      | -       | The URL of the JDBC connection. Refer to a case: jdbc:vertica://localhost:5433/vertica                                                                                                                                                                         |
+| driver                                    | String  | Yes      | -       | The jdbc class name used to connect to the remote data source,<br/> if you use Vertica the value is `com.vertica.jdbc.Driver`.                                                                                                                                 |
+| user                                      | String  | No       | -       | Connection instance user name                                                                                                                                                                                                                                  |
+| password                                  | String  | No       | -       | Connection instance password                                                                                                                                                                                                                                   |
+| query                                     | String  | No       | -       | Use this SQL to write upstream input data to the database, e.g. `INSERT ...`. `query` has a higher priority.                                                                                                                                                   |
+| database                                  | String  | No       | -       | Use this `database` and `table-name` to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority.                                                                          |
+| table                                     | String  | No       | -       | Use database and this table-name to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority.                                                                             |
+| primary_keys                              | Array   | No       | -       | This option is used to support operations such as `insert`, `delete`, and `update` when SQL is automatically generated.                                                                                                                                        |
+| support_upsert_by_query_primary_key_exist | Boolean | No       | false   | Choose to use INSERT and UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether the query primary key exists. This configuration is only used when the database does not support upsert syntax. **Note**: this method has low performance. |
+| connection_check_timeout_sec              | Int     | No       | 30      | The time in seconds to wait for the database operation used to validate the connection to complete.                                                                                                                                                            |
+| max_retries                               | Int     | No       | 0       | The number of retries for a failed submit (executeBatch).                                                                                                                                                                                                      |
+| batch_size                                | Int     | No       | 1000    | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`,<br/> the data will be flushed into the database.                                                                                           |
+| batch_interval_ms                         | Int     | No       | 1000    | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into the database.                                                                                               |
+| is_exactly_once                           | Boolean | No       | false   | Whether to enable exactly-once semantics, which will use Xa transactions. If enabled, you need to<br/>set `xa_data_source_class_name`.                                                                                                                         |
+| generate_sink_sql                         | Boolean | No       | false   | Generate SQL statements based on the database table you want to write to.                                                                                                                                                                                      |
+| xa_data_source_class_name                 | String  | No       | -       | The XA data source class name of the database driver, for example, for Vertica it is `com.vertical.cj.jdbc.VerticalXADataSource`; please refer to the appendix for other data sources.                                                                          |
+| max_commit_attempts                       | Int     | No       | 3       | The number of retries for transaction commit failures.                                                                                                                                                                                                         |
+| transaction_timeout_sec                   | Int     | No       | -1      | The timeout after the transaction is opened; the default is -1 (never timeout). Note that setting the timeout may affect<br/>exactly-once semantics.                                                                                                           |
+| auto_commit                               | Boolean | No       | true    | Automatic transaction commit is enabled by default.                                                                                                                                                                                                            |
+| common-options                            |         | No       | -       | Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.                                                                                                                                                           |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
+## Task Example
+
+### Simple:
+
+> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table test_table will also contain 16 rows of data. Before running this job, you need to create database test and table test_table in your Vertica. And if you have not yet installed and deployed SeaTunnel [...]
+
+```
+# Defining the runtime environment
+env {
+ # You can set flink configuration here
+ execution.parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+  # This is an example source plugin **only for test and demonstrate the feature source plugin**
+ FakeSource {
+ parallelism = 1
+ result_table_name = "fake"
+ row.num = 16
+ schema = {
+ fields {
+ name = "string"
+ age = "int"
+ }
+ }
+ }
+  # If you would like to get more information about how to configure seatunnel and see full list of source plugins,
+  # please go to https://seatunnel.apache.org/docs/category/source-v2
+}
+
+transform {
+  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+  # please go to https://seatunnel.apache.org/docs/category/transform-v2
+}
+
+sink {
+ jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+ user = "root"
+ password = "123456"
+ query = "insert into test_table(name,age) values(?,?)"
+ }
+  # If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
+  # please go to https://seatunnel.apache.org/docs/category/sink-v2
+}
+```
+
+### Generate Sink SQL
+
+> This example does not require writing complex SQL statements; you can configure the database name and table name, and the insert statements are automatically generated for you
+
+```
+sink {
+ jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+ user = "root"
+ password = "123456"
+ # Automatically generate sql statements based on database table names
+ generate_sink_sql = true
+ database = test
+ table = test_table
+ }
+}
+```
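+
+### Upsert By Primary Keys
+
+> When `generate_sink_sql` is enabled, the `primary_keys` option described above lets the connector build `insert`/`update`/`delete` statements. This sketch is illustrative: the `id` key column is an assumption, not part of this change.
+
+```
+sink {
+  jdbc {
+    url = "jdbc:vertica://localhost:5433/vertica"
+    driver = "com.vertica.jdbc.Driver"
+    user = "root"
+    password = "123456"
+    generate_sink_sql = true
+    database = test
+    table = test_table
+    # Illustrative key column used to generate upsert statements
+    primary_keys = ["id"]
+    # Query for the key first when the database lacks native upsert syntax (slower)
+    support_upsert_by_query_primary_key_exist = true
+  }
+}
+```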
+
+### Exactly-once:
+
+> For scenarios that require accurate writes, we guarantee exactly-once delivery
+
+```
+sink {
+ jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+
+ max_retries = 0
+ user = "root"
+ password = "123456"
+ query = "insert into test_table(name,age) values(?,?)"
+
+ is_exactly_once = "true"
+
+ xa_data_source_class_name = "com.vertical.cj.jdbc.VerticalXADataSource"
+ }
+}
+```
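+
+### Batch Tuning
+
+> The `batch_size` and `batch_interval_ms` options described above control when buffered rows are flushed; the values below are illustrative, not recommendations.
+
+```
+sink {
+  jdbc {
+    url = "jdbc:vertica://localhost:5433/vertica"
+    driver = "com.vertica.jdbc.Driver"
+    user = "root"
+    password = "123456"
+    query = "insert into test_table(name,age) values(?,?)"
+    # Flush after 500 buffered rows or every 2 seconds, whichever comes first
+    batch_size = 500
+    batch_interval_ms = 2000
+  }
+}
+```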
+
diff --git a/docs/en/connector-v2/source/DB2.md
b/docs/en/connector-v2/source/DB2.md
new file mode 100644
index 000000000..7ea91b716
--- /dev/null
+++ b/docs/en/connector-v2/source/DB2.md
@@ -0,0 +1,155 @@
+# DB2
+
+> JDBC DB2 Source Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [column projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [x] [support user-defined split](../../concept/connector-v2-features.md)
+
+> Supports query SQL and can achieve the effect of column projection.
+
+## Description
+
+Read external data source data through JDBC.
+
+## Supported DataSource Info
+
+| Datasource | Supported versions                                       | Driver                         | Url                               | Maven                                                                 |
+|------------|----------------------------------------------------------|--------------------------------|-----------------------------------|-----------------------------------------------------------------------|
+| DB2        | Different dependency version has different driver class. | com.ibm.db2.jdbc.app.DB2Driver | jdbc:db2://127.0.0.1:50000/dbname | [Download](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) |
+
+## Database Dependency
+
+> Please download the driver jar from the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example DB2 datasource: cp db2-connector-java-xxx.jar $SEATUNNEL_HOME/plugins/jdbc/lib/
+
+## Data Type Mapping
+
+| DB2 Data type                                                                                        | SeaTunnel Data type |
+|------------------------------------------------------------------------------------------------------|---------------------|
+| BOOLEAN                                                                                              | BOOLEAN             |
+| SMALLINT                                                                                             | SHORT               |
+| INT<br/>INTEGER                                                                                      | INTEGER             |
+| BIGINT                                                                                               | LONG                |
+| DECIMAL<br/>DEC<br/>NUMERIC<br/>NUM                                                                  | DECIMAL(38,18)      |
+| REAL                                                                                                 | FLOAT               |
+| FLOAT<br/>DOUBLE<br/>DOUBLE PRECISION<br/>DECFLOAT                                                   | DOUBLE              |
+| CHAR<br/>VARCHAR<br/>LONG VARCHAR<br/>CLOB<br/>GRAPHIC<br/>VARGRAPHIC<br/>LONG VARGRAPHIC<br/>DBCLOB | STRING              |
+| BLOB                                                                                                 | BYTES               |
+| DATE                                                                                                 | DATE                |
+| TIME                                                                                                 | TIME                |
+| TIMESTAMP                                                                                            | TIMESTAMP           |
+| ROWID<br/>XML                                                                                        | Not supported yet   |
+
+## Source Options
+
+| Name                         | Type   | Required | Default         | Description                                                                                                                                                                                                                                                              |
+|------------------------------|--------|----------|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url                          | String | Yes      | -               | The URL of the JDBC connection. Refer to a case: jdbc:db2://127.0.0.1:50000/dbname                                                                                                                                                                                        |
+| driver                       | String | Yes      | -               | The jdbc class name used to connect to the remote data source,<br/> if you use DB2 the value is `com.ibm.db2.jdbc.app.DB2Driver`.                                                                                                                                         |
+| user                         | String | No       | -               | Connection instance user name                                                                                                                                                                                                                                             |
+| password                     | String | No       | -               | Connection instance password                                                                                                                                                                                                                                              |
+| query                        | String | Yes      | -               | Query statement                                                                                                                                                                                                                                                           |
+| connection_check_timeout_sec | Int    | No       | 30              | The time in seconds to wait for the database operation used to validate the connection to complete                                                                                                                                                                        |
+| partition_column             | String | No       | -               | The column name for parallelism's partition. Only numeric-type primary keys are supported, and only one column can be configured.                                                                                                                                         |
+| partition_lower_bound        | Long   | No       | -               | The partition_column min value for scan; if not set, SeaTunnel will query the database to get the min value.                                                                                                                                                              |
+| partition_upper_bound        | Long   | No       | -               | The partition_column max value for scan; if not set, SeaTunnel will query the database to get the max value.                                                                                                                                                              |
+| partition_num                | Int    | No       | job parallelism | The number of partitions; only positive integers are supported. The default value is the job parallelism.                                                                                                                                                                 |
+| fetch_size                   | Int    | No       | 0               | For queries that return a large number of objects, you can configure<br/> the row fetch size used in the query to improve performance by<br/> reducing the number of database hits required to satisfy the selection criteria.<br/> Zero means use the JDBC default value. |
+| common-options               |        | No       | -               | Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details                                                                                                                                                                   |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
+## Task Example
+
+### Simple:
+
+> This example queries 16 rows of data from the type_bin 'table' in your test "database" with single parallelism and queries all of its fields. You can also specify which fields to query, with the final results output to the console.
+
+```
+# Defining the runtime environment
+env {
+ # You can set flink configuration here
+ execution.parallelism = 2
+ job.mode = "BATCH"
+}
+source{
+ Jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ query = "select * from table_xxx"
+ }
+}
+
+transform {
+  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+ # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+}
+
+sink {
+ Console {}
+}
+```
+
+### Parallel:
+
+> Read your query table in parallel using the shard field and shard data you configured. You can do this if you want to read the whole table
+
+```
+source {
+ Jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ # Define query logic as required
+ query = "select * from type_bin"
+ # Parallel sharding reads fields
+ partition_column = "id"
+ # Number of fragments
+ partition_num = 10
+ }
+}
+```
+
+### Parallel Boundary:
+
+> It is more efficient to specify the data within the upper and lower bounds of the query, i.e., to read your data source according to the upper and lower boundaries you configured
+
+```
+source {
+ Jdbc {
+ url = "jdbc:db2://127.0.0.1:50000/dbname"
+ driver = "com.ibm.db2.jdbc.app.DB2Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ # Define query logic as required
+ query = "select * from type_bin"
+ partition_column = "id"
+ # Read start boundary
+ partition_lower_bound = 1
+ # Read end boundary
+ partition_upper_bound = 500
+ partition_num = 10
+ }
+}
+```
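+
+### Fetch Size
+
+> The `fetch_size` option described above can be added to any of the examples; this sketch is illustrative and only shows where the option goes, and the value 1000 is an assumption rather than a recommendation.
+
+```
+source {
+  Jdbc {
+    url = "jdbc:db2://127.0.0.1:50000/dbname"
+    driver = "com.ibm.db2.jdbc.app.DB2Driver"
+    user = "root"
+    password = "123456"
+    query = "select * from type_bin"
+    # Hint the driver to fetch rows in chunks of 1000 instead of its default
+    fetch_size = 1000
+  }
+}
+```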
+
diff --git a/docs/en/connector-v2/source/Vertica.md
b/docs/en/connector-v2/source/Vertica.md
new file mode 100644
index 000000000..66f18e7a4
--- /dev/null
+++ b/docs/en/connector-v2/source/Vertica.md
@@ -0,0 +1,157 @@
+# Vertica
+
+> JDBC Vertica Source Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [column projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [x] [support user-defined split](../../concept/connector-v2-features.md)
+
+> Supports query SQL and can achieve the effect of column projection.
+
+## Description
+
+Read external data source data through JDBC.
+
+## Supported DataSource Info
+
+| Datasource | Supported versions                                       | Driver                  | Url                                   | Maven                                                                |
+|------------|----------------------------------------------------------|-------------------------|---------------------------------------|----------------------------------------------------------------------|
+| Vertica    | Different dependency version has different driver class. | com.vertica.jdbc.Driver | jdbc:vertica://localhost:5433/vertica | [Download](https://www.vertica.com/download/vertica/client-drivers/) |
+
+## Database Dependency
+
+> Please download the driver jar from the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example Vertica datasource: cp vertica-jdbc-xxx.jar $SEATUNNEL_HOME/plugins/jdbc/lib/
+
+## Data Type Mapping
+
+| Vertica Data type                                                                                                                 | SeaTunnel Data type                                                                                                                               |
+|-----------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| BIT                                                                                                                               | BOOLEAN                                                                                                                                           |
+| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT                                                                                                                                               |
+| INT UNSIGNED<br/>INTEGER UNSIGNED<br/>BIGINT                                                                                      | LONG                                                                                                                                              |
+| BIGINT UNSIGNED                                                                                                                   | DECIMAL(20,0)                                                                                                                                     |
+| DECIMAL(x,y)(Get the designated column's specified column size.<38)                                                               | DECIMAL(x,y)                                                                                                                                      |
+| DECIMAL(x,y)(Get the designated column's specified column size.>38)                                                               | DECIMAL(38,18)                                                                                                                                    |
+| DECIMAL UNSIGNED                                                                                                                  | DECIMAL((Get the designated column's specified column size)+1,<br/>(Gets the designated column's number of digits to right of the decimal point.)) |
+| FLOAT<br/>FLOAT UNSIGNED                                                                                                          | FLOAT                                                                                                                                             |
+| DOUBLE<br/>DOUBLE UNSIGNED                                                                                                        | DOUBLE                                                                                                                                            |
+| CHAR<br/>VARCHAR<br/>TINYTEXT<br/>MEDIUMTEXT<br/>TEXT<br/>LONGTEXT<br/>JSON                                                       | STRING                                                                                                                                            |
+| DATE                                                                                                                              | DATE                                                                                                                                              |
+| TIME                                                                                                                              | TIME                                                                                                                                              |
+| DATETIME<br/>TIMESTAMP                                                                                                            | TIMESTAMP                                                                                                                                         |
+| TINYBLOB<br/>MEDIUMBLOB<br/>BLOB<br/>LONGBLOB<br/>BINARY<br/>VARBINAR<br/>BIT(n)                                                  | BYTES                                                                                                                                             |
+| GEOMETRY<br/>UNKNOWN                                                                                                              | Not supported yet                                                                                                                                 |
+
+## Source Options
+
+| Name                         | Type   | Required | Default         | Description                                                                                                                                                                                                                                                              |
+|------------------------------|--------|----------|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url                          | String | Yes      | -               | The URL of the JDBC connection. Refer to a case: jdbc:vertica://localhost:5433/vertica                                                                                                                                                                                    |
+| driver                       | String | Yes      | -               | The jdbc class name used to connect to the remote data source,<br/> if you use Vertica the value is `com.vertica.jdbc.Driver`.                                                                                                                                            |
+| user                         | String | No       | -               | Connection instance user name                                                                                                                                                                                                                                             |
+| password                     | String | No       | -               | Connection instance password                                                                                                                                                                                                                                              |
+| query                        | String | Yes      | -               | Query statement                                                                                                                                                                                                                                                           |
+| connection_check_timeout_sec | Int    | No       | 30              | The time in seconds to wait for the database operation used to validate the connection to complete                                                                                                                                                                        |
+| partition_column             | String | No       | -               | The column name for parallelism's partition. Only numeric-type primary keys are supported, and only one column can be configured.                                                                                                                                         |
+| partition_lower_bound        | Long   | No       | -               | The partition_column min value for scan; if not set, SeaTunnel will query the database to get the min value.                                                                                                                                                              |
+| partition_upper_bound        | Long   | No       | -               | The partition_column max value for scan; if not set, SeaTunnel will query the database to get the max value.                                                                                                                                                              |
+| partition_num                | Int    | No       | job parallelism | The number of partitions; only positive integers are supported. The default value is the job parallelism.                                                                                                                                                                 |
+| fetch_size                   | Int    | No       | 0               | For queries that return a large number of objects, you can configure<br/> the row fetch size used in the query to improve performance by<br/> reducing the number of database hits required to satisfy the selection criteria.<br/> Zero means use the JDBC default value. |
+| common-options               |        | No       | -               | Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details                                                                                                                                                                   |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
+## Task Example
+
+### Simple:
+
+> This example queries 16 rows of data from the type_bin 'table' in your test "database" with single parallelism and queries all of its fields. You can also specify which fields to query, with the final results output to the console.
+
+```
+# Defining the runtime environment
+env {
+ # You can set flink configuration here
+ execution.parallelism = 2
+ job.mode = "BATCH"
+}
+source{
+ Jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ query = "select * from type_bin limit 16"
+ }
+}
+
+transform {
+  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+ # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+}
+
+sink {
+ Console {}
+}
+```
+
+### Parallel:
+
+> Read your query table in parallel using the shard field and shard data you configured. You can do this if you want to read the whole table
+
+```
+source {
+ Jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ # Define query logic as required
+ query = "select * from type_bin"
+ # Parallel sharding reads fields
+ partition_column = "id"
+ # Number of fragments
+ partition_num = 10
+ }
+}
+```
+
+### Parallel Boundary:
+
+> It is more efficient to specify the data within the upper and lower bounds of the query, i.e., to read your data source according to the upper and lower boundaries you configured
+
+```
+source {
+ Jdbc {
+ url = "jdbc:vertica://localhost:5433/vertica"
+ driver = "com.vertica.jdbc.Driver"
+ connection_check_timeout_sec = 100
+ user = "root"
+ password = "123456"
+ # Define query logic as required
+ query = "select * from type_bin"
+ partition_column = "id"
+ # Read start boundary
+ partition_lower_bound = 1
+ # Read end boundary
+ partition_upper_bound = 500
+ partition_num = 10
+ }
+}
+```
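+
+### Fetch Size
+
+> The `fetch_size` option described above can be added to any of the examples; this sketch is illustrative and only shows where the option goes, and the value 1000 is an assumption rather than a recommendation.
+
+```
+source {
+  Jdbc {
+    url = "jdbc:vertica://localhost:5433/vertica"
+    driver = "com.vertica.jdbc.Driver"
+    user = "root"
+    password = "123456"
+    query = "select * from type_bin"
+    # Hint the driver to fetch rows in chunks of 1000 instead of its default
+    fetch_size = 1000
+  }
+}
+```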
+