This is an automated email from the ASF dual-hosted git repository.
lidongdai pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git
The following commit(s) were added to refs/heads/dev by this push:
new 0f8e9855b9 [improve] repair doc dead link and error link (#10439)
0f8e9855b9 is described below
commit 0f8e9855b9e457ef131190308c671e73447b0018
Author: misi <[email protected]>
AuthorDate: Thu Mar 19 09:51:29 2026 +0800
[improve] repair doc dead link and error link (#10439)
Co-authored-by: misi <[email protected]>
---
docs/en/connectors/sink/DuckDB.md | 4 ++--
docs/en/connectors/sink/GraphQL.md | 6 +++---
docs/en/connectors/sink/Paimon.md | 2 +-
docs/en/connectors/source/DuckDB.md | 12 ++++++------
docs/en/connectors/source/HdfsFile.md | 22 +++++++++++-----------
docs/en/connectors/source/HiveJdbc.md | 2 +-
docs/en/connectors/source/Kafka.md | 2 +-
docs/en/connectors/source/Mysql.md | 2 +-
docs/en/connectors/source/PostgreSQL.md | 2 +-
docs/en/connectors/source/S3File.md | 4 ++--
docs/en/connectors/source/Snowflake.md | 2 +-
docs/en/engines/zeta/rest-api-v1.md | 2 +-
docs/en/engines/zeta/rest-api-v2.md | 2 +-
docs/en/faq.md | 14 +++++++-------
docs/en/getting-started/docker/docker.md | 4 ++--
docs/en/getting-started/kubernetes/helm.md | 6 +++---
.../getting-started/locally/quick-start-flink.md | 6 +++---
.../locally/quick-start-seatunnel-engine.md | 10 +++++-----
.../getting-started/locally/quick-start-spark.md | 6 +++---
docs/en/introduction/about.md | 6 +++---
docs/en/introduction/concepts/config.md | 6 +++---
docs/en/introduction/configuration/JobEnvConfig.md | 2 +-
.../introduction/configuration/schema-evolution.md | 22 +++++++++++-----------
docs/en/introduction/configuration/sql-config.md | 2 +-
docs/en/transforms/llm.md | 2 +-
docs/zh/connectors/sink/Assert.md | 2 +-
docs/zh/connectors/sink/DB2.md | 6 +++---
docs/zh/connectors/sink/DuckDB.md | 4 ++--
docs/zh/connectors/sink/GraphQL.md | 6 +++---
docs/zh/connectors/sink/HdfsFile.md | 6 +++---
docs/zh/connectors/sink/Kingbase.md | 6 +++---
docs/zh/connectors/sink/Mysql.md | 6 +++---
docs/zh/connectors/sink/OceanBase.md | 6 +++---
docs/zh/connectors/sink/Oracle.md | 6 +++---
docs/zh/connectors/sink/Paimon.md | 2 +-
docs/zh/connectors/sink/PostgreSql.md | 6 +++---
docs/zh/connectors/sink/S3File.md | 6 +++---
docs/zh/connectors/sink/Sls.md | 2 +-
docs/zh/connectors/sink/Snowflake.md | 6 +++---
docs/zh/connectors/sink/SqlServer.md | 8 ++++----
docs/zh/connectors/sink/Vertica.md | 6 +++---
docs/zh/connectors/source/AmazonSqs.md | 2 +-
docs/zh/connectors/source/DB2.md | 2 +-
docs/zh/connectors/source/DuckDB.md | 14 +++++++-------
docs/zh/connectors/source/GraphQL.md | 6 +++---
docs/zh/connectors/source/HdfsFile.md | 22 +++++++++++-----------
docs/zh/connectors/source/HiveJdbc.md | 2 +-
docs/zh/connectors/source/Kafka.md | 2 +-
docs/zh/connectors/source/Mysql.md | 2 +-
docs/zh/connectors/source/PostgreSQL.md | 2 +-
docs/zh/connectors/source/S3File.md | 4 ++--
docs/zh/connectors/source/Sls.md | 2 +-
docs/zh/connectors/source/Snowflake.md | 2 +-
docs/zh/connectors/source/SqlServer.md | 12 ++++++------
docs/zh/connectors/source/Vertica.md | 2 +-
docs/zh/engines/flink.md | 2 +-
docs/zh/engines/zeta/about.md | 2 +-
docs/zh/engines/zeta/rest-api-v1.md | 2 +-
docs/zh/engines/zeta/rest-api-v2.md | 2 +-
docs/zh/faq.md | 14 +++++++-------
docs/zh/getting-started/kubernetes/helm.md | 2 +-
.../getting-started/locally/quick-start-flink.md | 4 ++--
.../locally/quick-start-seatunnel-engine.md | 4 ++--
.../getting-started/locally/quick-start-spark.md | 4 ++--
.../introduction/configuration/schema-evolution.md | 22 +++++++++++-----------
docs/zh/introduction/configuration/sql-config.md | 4 ++--
docs/zh/transforms/llm.md | 2 +-
67 files changed, 187 insertions(+), 187 deletions(-)
diff --git a/docs/en/connectors/sink/DuckDB.md b/docs/en/connectors/sink/DuckDB.md
index d3133197a5..42a372d341 100644
--- a/docs/en/connectors/sink/DuckDB.md
+++ b/docs/en/connectors/sink/DuckDB.md
@@ -31,8 +31,8 @@ semantics (using XA transaction guarantee).
## Key Features
-- [x] [exactly-once](../../concept/connector-v2-features.md)
-- [x] [cdc](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../introduction/concepts/connector-v2-features.md)
+- [x] [cdc](../../introduction/concepts/connector-v2-features.md)
> Use `Xa transactions` to ensure `exactly-once`. So only support
> `exactly-once` for the database which is
> support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
diff --git a/docs/en/connectors/sink/GraphQL.md b/docs/en/connectors/sink/GraphQL.md
index df9f1bbee3..682eeb52b1 100644
--- a/docs/en/connectors/sink/GraphQL.md
+++ b/docs/en/connectors/sink/GraphQL.md
@@ -12,9 +12,9 @@ import ChangeLog from '../changelog/connector-graphql.md';
## Key Features
-- [ ] [exactly-once](../../concept/connector-v2-features.md)
-- [ ] [cdc](../../concept/connector-v2-features.md)
-- [x] [support multiple table write](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../introduction/concepts/connector-v2-features.md)
+- [ ] [cdc](../../introduction/concepts/connector-v2-features.md)
+- [x] [support multiple table write](../../introduction/concepts/connector-v2-features.md)
## Description
diff --git a/docs/en/connectors/sink/Paimon.md b/docs/en/connectors/sink/Paimon.md
index 2769dccf52..30c6bfdd6a 100644
--- a/docs/en/connectors/sink/Paimon.md
+++ b/docs/en/connectors/sink/Paimon.md
@@ -98,7 +98,7 @@ All `changelog-producer` modes are currently supported. The default is `none`.
* [`lookup`](https://paimon.apache.org/docs/master/primary-key-table/changelog-producer/#lookup)
* [`full-compaction`](https://paimon.apache.org/docs/master/primary-key-table/changelog-producer/#full-compaction)
> note:
-> When you use a streaming mode to read paimon table,different mode will produce [different results](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/Paimon.md#changelog)。
+> When you use a streaming mode to read paimon table,different mode will produce [different results](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/Paimon.md#changelog)。
## Filesystems
The Paimon connector supports writing data to multiple file systems.
Currently, the supported file systems are hdfs and s3.
diff --git a/docs/en/connectors/source/DuckDB.md b/docs/en/connectors/source/DuckDB.md
index 4543c8e854..cde250e493 100644
--- a/docs/en/connectors/source/DuckDB.md
+++ b/docs/en/connectors/source/DuckDB.md
@@ -30,12 +30,12 @@ Read external data source data through JDBC.
## Key Features
-- [x] [batch](../../concept/connector-v2-features.md)
-- [ ] [stream](../../concept/connector-v2-features.md)
-- [x] [exactly-once](../../concept/connector-v2-features.md)
-- [x] [column projection](../../concept/connector-v2-features.md)
-- [x] [parallelism](../../concept/connector-v2-features.md)
-- [x] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] [batch](../../introduction/concepts/connector-v2-features.md)
+- [ ] [stream](../../introduction/concepts/connector-v2-features.md)
+- [x] [exactly-once](../../introduction/concepts/connector-v2-features.md)
+- [x] [column projection](../../introduction/concepts/connector-v2-features.md)
+- [x] [parallelism](../../introduction/concepts/connector-v2-features.md)
+- [x] [support user-defined split](../../introduction/concepts/connector-v2-features.md)
> supports query SQL and can achieve projection effect.
diff --git a/docs/en/connectors/source/HdfsFile.md b/docs/en/connectors/source/HdfsFile.md
index 6b7fbfb7ef..34936be0d6 100644
--- a/docs/en/connectors/source/HdfsFile.md
+++ b/docs/en/connectors/source/HdfsFile.md
@@ -12,20 +12,20 @@ import ChangeLog from '../changelog/connector-file-hadoop.md';
## Key Features
-- [x] [batch](../../concept/connector-v2-features.md)
-- [ ] [stream](../../concept/connector-v2-features.md)
-- [x] [multimodal](../../concept/connector-v2-features.md#multimodal)
+- [x] [batch](../../introduction/concepts/connector-v2-features.md)
+- [ ] [stream](../../introduction/concepts/connector-v2-features.md)
+- [x] [multimodal](../../introduction/concepts/connector-v2-features.md#multimodal)
Use binary file format to read and write files in any format, such as videos, pictures, etc. In short, any files can be synchronized to the target place.
-- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../introduction/concepts/connector-v2-features.md)
Read all the data in a split in a pollNext call. What splits are read will be saved in snapshot.
-- [x] [column projection](../../concept/connector-v2-features.md)
-- [x] [parallelism](../../concept/connector-v2-features.md)
-- [ ] [support user-defined split](../../concept/connector-v2-features.md)
-- [x] [support multiple table read](../../concept/connector-v2-features.md)
+- [x] [column projection](../../introduction/concepts/connector-v2-features.md)
+- [x] [parallelism](../../introduction/concepts/connector-v2-features.md)
+- [ ] [support user-defined split](../../introduction/concepts/connector-v2-features.md)
+- [x] [support multiple table read](../../introduction/concepts/connector-v2-features.md)
- [x] file format file
- [x] text
- [x] csv
@@ -319,12 +319,12 @@ source {
fs.defaultFS = "hdfs://namenode001"
}
# If you would like to get more information about how to configure seatunnel and see full list of source plugins,
- # please go to https://seatunnel.apache.org/docs/connector-v2/source
+ # please go to https://seatunnel.apache.org/docs/connectors/source
}
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2
+ # please go to https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -334,7 +334,7 @@ sink {
file_format_type = "orc"
}
# If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
- # please go to https://seatunnel.apache.org/docs/connector-v2/sink
+ # please go to https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/en/connectors/source/HiveJdbc.md b/docs/en/connectors/source/HiveJdbc.md
index a050cd9c4d..27986c6235 100644
--- a/docs/en/connectors/source/HiveJdbc.md
+++ b/docs/en/connectors/source/HiveJdbc.md
@@ -117,7 +117,7 @@ source{
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/en/connectors/source/Kafka.md b/docs/en/connectors/source/Kafka.md
index 9b88cf7de2..9557d07435 100644
--- a/docs/en/connectors/source/Kafka.md
+++ b/docs/en/connectors/source/Kafka.md
@@ -78,7 +78,7 @@ Only the data of the `test.public.products` table will be consumed.
## Metadata Support
-The Kafka source automatically injects `ConsumerRecord.timestamp` into the SeaTunnel `EventTime` metadata when the value is non-negative. You can expose it as a normal field through the [Metadata transform](../../transform-v2/metadata.md) for downstream SQL or partitioning.
+The Kafka source automatically injects `ConsumerRecord.timestamp` into the SeaTunnel `EventTime` metadata when the value is non-negative. You can expose it as a normal field through the [Metadata transform](../../transforms/metadata.md) for downstream SQL or partitioning.
```hocon
source {
diff --git a/docs/en/connectors/source/Mysql.md b/docs/en/connectors/source/Mysql.md
index 2034b6de83..575da1a528 100644
--- a/docs/en/connectors/source/Mysql.md
+++ b/docs/en/connectors/source/Mysql.md
@@ -204,7 +204,7 @@ source{
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/en/connectors/source/PostgreSQL.md b/docs/en/connectors/source/PostgreSQL.md
index 9b00450ef3..f73855b7d1 100644
--- a/docs/en/connectors/source/PostgreSQL.md
+++ b/docs/en/connectors/source/PostgreSQL.md
@@ -189,7 +189,7 @@ source{
}
transform {
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/en/connectors/source/S3File.md b/docs/en/connectors/source/S3File.md
index 6446dc224a..ada01496dd 100644
--- a/docs/en/connectors/source/S3File.md
+++ b/docs/en/connectors/source/S3File.md
@@ -431,7 +431,7 @@ source {
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2
+ # please go to https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -493,7 +493,7 @@ source {
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2
+ # please go to https://seatunnel.apache.org/docs/transforms
}
sink {
diff --git a/docs/en/connectors/source/Snowflake.md b/docs/en/connectors/source/Snowflake.md
index c2fa28b2eb..de61f18d23 100644
--- a/docs/en/connectors/source/Snowflake.md
+++ b/docs/en/connectors/source/Snowflake.md
@@ -104,7 +104,7 @@ Read external data source data through JDBC.
}
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
Console {}
diff --git a/docs/en/engines/zeta/rest-api-v1.md b/docs/en/engines/zeta/rest-api-v1.md
index 51fd831f7f..f4d750fc65 100644
--- a/docs/en/engines/zeta/rest-api-v1.md
+++ b/docs/en/engines/zeta/rest-api-v1.md
@@ -672,7 +672,7 @@ When we can't get the job info, the response will be:
<details>
<summary><code>POST</code> <code><b>/hazelcast/rest/maps/encrypt-config</b></code> <code>(Returns the encrypted config if config is encrypted successfully.)</code></summary>
-For more information about customize encryption, please refer to the documentation [config-encryption-decryption](../connector-v2/Config-Encryption-Decryption.md).
+For more information about customize encryption, please refer to the documentation [config-encryption-decryption](../../introduction/concepts/config-encryption-decryption.md).
#### Body
diff --git a/docs/en/engines/zeta/rest-api-v2.md b/docs/en/engines/zeta/rest-api-v2.md
index 88c68c00d1..2badb19237 100644
--- a/docs/en/engines/zeta/rest-api-v2.md
+++ b/docs/en/engines/zeta/rest-api-v2.md
@@ -936,7 +936,7 @@ curl --location 'http://127.0.0.1:8080/submit-job/upload' --form 'config_file=@"
<details>
<summary><code>POST</code> <code><b>/encrypt-config</b></code> <code>(Returns the encrypted config if config is encrypted successfully.)</code></summary>
-For more information about customize encryption, please refer to the documentation [config-encryption-decryption](../connector-v2/Config-Encryption-Decryption.md).
+For more information about customize encryption, please refer to the documentation [config-encryption-decryption](../../introduction/concepts/config-encryption-decryption.md).
#### Body
diff --git a/docs/en/faq.md b/docs/en/faq.md
index 4d941e3401..b6729f8bc4 100644
--- a/docs/en/faq.md
+++ b/docs/en/faq.md
@@ -2,8 +2,8 @@
## What data sources and destinations does SeaTunnel support?
SeaTunnel supports various data sources and destinations. You can find a detailed list on the following list:
-- Supported data sources (Source): [Source List](https://seatunnel.apache.org/docs/connector-v2/source)
-- Supported data destinations (Sink): [Sink List](https://seatunnel.apache.org/docs/connector-v2/sink)
+- Supported data sources (Source): [Source List](https://seatunnel.apache.org/docs/connectors/source)
+- Supported data destinations (Sink): [Sink List](https://seatunnel.apache.org/docs/connectors/sink)
## Does SeaTunnel support batch and streaming processing?
SeaTunnel supports both batch and streaming processing modes. You can select the appropriate mode based on your specific business scenarios and needs. Batch processing is suitable for scheduled data integration tasks, while streaming processing is ideal for real-time integration and Change Data Capture (CDC).
@@ -12,7 +12,7 @@ SeaTunnel supports both batch and streaming processing modes. You can select the
Spark and Flink are not mandatory. SeaTunnel supports Zeta, Spark, and Flink as integration engines, allowing you to choose one based on your needs. The community highly recommends Zeta, a new generation high-performance integration engine specifically designed for integration scenarios. Zeta is affectionately called "Ultraman Zeta" by community users! The community offers extensive support for Zeta, making it the most feature-rich option.
## What data transformation functions does SeaTunnel provide?
-SeaTunnel supports multiple data transformation functions, including field mapping, data filtering, data format conversion, and more. You can implement data transformations through the `transform` module in the configuration file. For more details, refer to the SeaTunnel [Transform Documentation](https://seatunnel.apache.org/docs/transform-v2).
+SeaTunnel supports multiple data transformation functions, including field mapping, data filtering, data format conversion, and more. You can implement data transformations through the `transform` module in the configuration file. For more details, refer to the SeaTunnel [Transform Documentation](https://seatunnel.apache.org/docs/transforms).
## Can SeaTunnel support custom data cleansing rules?
Yes, SeaTunnel supports custom data cleansing rules. You can configure custom rules in the `transform` module, such as cleaning up dirty data, removing invalid records, or converting fields.
@@ -21,7 +21,7 @@ Yes, SeaTunnel supports custom data cleansing rules. You can configure custom ru
SeaTunnel supports incremental data integration. For example, the CDC connector allows real-time capture of data changes, which is ideal for scenarios requiring real-time data integration.
## What CDC data sources are currently supported by SeaTunnel?
-SeaTunnel currently supports MongoDB CDC, MySQL CDC, OpenGauss CDC, Oracle CDC, PostgreSQL CDC, SQL Server CDC, TiDB CDC, and more. For more details, refer to the [Source List](https://seatunnel.apache.org/docs/connector-v2/source).
+SeaTunnel currently supports MongoDB CDC, MySQL CDC, OpenGauss CDC, Oracle CDC, PostgreSQL CDC, SQL Server CDC, TiDB CDC, and more. For more details, refer to the [Source List](https://seatunnel.apache.org/docs/connectors/source).
## How do I enable permissions required for SeaTunnel CDC integration?
Please refer to the official SeaTunnel documentation for the necessary steps to enable permissions for each connector’s CDC functionality.
@@ -38,7 +38,7 @@ Before starting an integration task, you can select different handling schemes f
- **`CREATE_SCHEMA_WHEN_NOT_EXIST`**: Creates the table if it does not exist; skips creation if the table already exists.
- **`ERROR_WHEN_SCHEMA_NOT_EXIST`**: Throws an error if the table does not exist.
- **`IGNORE`**: Ignores table handling.
- Many connectors currently support automatic table creation. Refer to the specific connector documentation, such as [Jdbc sink](https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc/#schema_save_mode-enum), for more information.
+ Many connectors currently support automatic table creation. Refer to the specific connector documentation, such as [Jdbc sink](https://seatunnel.apache.org/docs/connectors/sink/Jdbc/#schema_save_mode-enum), for more information.
## Does SeaTunnel support handling existing data before starting a data integration task?
Yes, you can specify different processing schemes for existing data on the target side before starting an integration task, controlled via the `data_save_mode` parameter. Available options include:
@@ -46,7 +46,7 @@ Yes, you can specify different processing schemes for existing data on the targe
- **`APPEND_DATA`**: Retains both the database structure and data.
- **`CUSTOM_PROCESSING`**: User-defined processing.
- **`ERROR_WHEN_DATA_EXISTS`**: Throws an error if data already exists.
- Many connectors support handling existing data; please refer to the respective connector documentation, such as [Jdbc sink](https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc#data_save_mode-enum).
+ Many connectors support handling existing data; please refer to the respective connector documentation, such as [Jdbc sink](https://seatunnel.apache.org/docs/connectors/sink/Jdbc#data_save_mode-enum).
## Does SeaTunnel support exactly-once consistency?
SeaTunnel supports exactly-once consistency for some data sources, such as MySQL and PostgreSQL, ensuring data consistency during integration. Note that exactly-once consistency depends on the capabilities of the underlying database.
@@ -84,7 +84,7 @@ $SEATUNNEL_HOME/bin/seatunnel.sh \
-i date=20231110
```
-Use the `-i` or `--variable` parameter with `key=value` to specify the variable's value, where `key` matches the variable name in the configuration. For details, see: [SeaTunnel Variable Configuration](https://seatunnel.apache.org/docs/concept/config)
+Use the `-i` or `--variable` parameter with `key=value` to specify the variable's value, where `key` matches the variable name in the configuration. For details, see: [SeaTunnel Variable Configuration](https://seatunnel.apache.org/docs/introduction/concepts/config)
## How can I write multi-line text in the configuration file?
If the text is long and needs to be wrapped, you can use triple quotes to indicate the beginning and end:
diff --git a/docs/en/getting-started/docker/docker.md b/docs/en/getting-started/docker/docker.md
index 8e87fe9cc8..43d5b0f052 100644
--- a/docs/en/getting-started/docker/docker.md
+++ b/docs/en/getting-started/docker/docker.md
@@ -397,11 +397,11 @@ docker run --name seatunnel_client \
./bin/seatunnel.sh -l
```
-more command please refer [user-command](../../seatunnel-engine/user-command.md)
+more command please refer [user-command](../../engines/zeta/user-command.md)
#### use rest api
-please refer [Submit A Job](../../seatunnel-engine/rest-api-v2.md#submit-a-job)
+please refer [Submit A Job](../../engines/zeta/rest-api-v2.md#submit-a-job)
diff --git a/docs/en/getting-started/kubernetes/helm.md b/docs/en/getting-started/kubernetes/helm.md
index 39bd643476..299dd628b6 100644
--- a/docs/en/getting-started/kubernetes/helm.md
+++ b/docs/en/getting-started/kubernetes/helm.md
@@ -72,9 +72,9 @@ curl http://127.0.0.1:5801/running-jobs
curl http://127.0.0.1:5801/system-monitoring-information
```
-After that you can submit your job by [rest-api-v2](../../seatunnel-engine/rest-api-v2.md)
+After that you can submit your job by [rest-api-v2](../../engines/zeta/rest-api-v2.md)
## What's More
-For now, you have taken a quick look at SeaTunnel, and you can see [connector](../../connector-v2/source) to find all sources and sinks SeaTunnel supported.
-Or see [deployment](../../seatunnel-engine/deployment.md) if you want to submit your application in another kind of your engine cluster.
+For now, you have taken a quick look at SeaTunnel, and you can see [connector](../../connectors/source) to find all sources and sinks SeaTunnel supported.
+Or see [deployment](../../engines/zeta/deployment.md) if you want to submit your application in another kind of your engine cluster.
diff --git a/docs/en/getting-started/locally/quick-start-flink.md b/docs/en/getting-started/locally/quick-start-flink.md
index fbfc945fc7..62a5081452 100644
--- a/docs/en/getting-started/locally/quick-start-flink.md
+++ b/docs/en/getting-started/locally/quick-start-flink.md
@@ -57,7 +57,7 @@ sink {
```
-More information about config please check [Config Concept](../../concept/config.md)
+More information about config please check [Config Concept](../../introduction/concepts/config.md)
## Step 4: Run SeaTunnel Application
@@ -105,7 +105,7 @@ row=16 : SGZCr, 94186144
## What's More
-- Start write your own config file now, choose the [connector](../../connector-v2/source) you want to use, and configure the parameters according to the connector's documentation.
-- See [SeaTunnel With Flink](../../other-engine/flink.md) if you want to know more about SeaTunnel With Flink.
+- Start write your own config file now, choose the [connector](../../connectors/source) you want to use, and configure the parameters according to the connector's documentation.
+- See [SeaTunnel With Flink](../../engines/flink.md) if you want to know more about SeaTunnel With Flink.
- SeaTunnel have a builtin engine named `Zeta`, and it's the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
diff --git a/docs/en/getting-started/locally/quick-start-seatunnel-engine.md b/docs/en/getting-started/locally/quick-start-seatunnel-engine.md
index fe9d8ee798..f83de75176 100644
--- a/docs/en/getting-started/locally/quick-start-seatunnel-engine.md
+++ b/docs/en/getting-started/locally/quick-start-seatunnel-engine.md
@@ -51,7 +51,7 @@ sink {
```
-More information can be found in [Config Concept](../../concept/config.md)
+More information can be found in [Config Concept](../../introduction/concepts/config.md)
## Step 3: Run SeaTunnel Application
@@ -157,7 +157,7 @@ sink {
}
```
-For more information about the configuration, please refer to [Basic Concepts of Configuration](../../concept/config.md).
+For more information about the configuration, please refer to [Basic Concepts of Configuration](../../introduction/concepts/config.md).
### Step 4: Run the SeaTunnel Application
@@ -188,13 +188,13 @@ Total Failed Count : 0
:::tip
-If you want to optimize your job, refer to the connector documentation for [Source-MySQL](../../connector-v2/source/Mysql.md) and [Sink-Doris](../../connector-v2/sink/Doris.md).
+If you want to optimize your job, refer to the connector documentation for [Source-MySQL](../../connectors/source/Mysql.md) and [Sink-Doris](../../connectors/sink/Doris.md).
:::
## What's More
-- Start write your own config file now, choose the [connector](../../connector-v2/source) you want to use, and configure the parameters according to the connector's documentation.
-- See [SeaTunnel Engine(Zeta)](../../seatunnel-engine/about.md) if you want to know more about SeaTunnel Engine. Here you will learn how to deploy SeaTunnel Engine and how to use it in cluster mode.
+- Start write your own config file now, choose the [connector](../../connectors/source) you want to use, and configure the parameters according to the connector's documentation.
+- See [SeaTunnel Engine(Zeta)](../../engines/zeta/about.md) if you want to know more about SeaTunnel Engine. Here you will learn how to deploy SeaTunnel Engine and how to use it in cluster mode.
diff --git a/docs/en/getting-started/locally/quick-start-spark.md b/docs/en/getting-started/locally/quick-start-spark.md
index e490f238b3..9796fc49e7 100644
--- a/docs/en/getting-started/locally/quick-start-spark.md
+++ b/docs/en/getting-started/locally/quick-start-spark.md
@@ -58,7 +58,7 @@ sink {
```
-More information about config please check [Config Concept](../../concept/config.md)
+More information about config please check [Config Concept](../../introduction/concepts/config.md)
## Step 4: Run SeaTunnel Application
@@ -112,7 +112,7 @@ row=16 : SGZCr, 94186144
## What's More
-- Start write your own config file now, choose the [connector](../../connector-v2/source) you want to use, and configure the parameters according to the connector's documentation.
-- See [SeaTunnel With Spark](../../other-engine/spark.md) if you want to know more about SeaTunnel With Spark.
+- Start write your own config file now, choose the [connector](../../connectors/source) you want to use, and configure the parameters according to the connector's documentation.
+- See [SeaTunnel With Spark](../../engines/spark.md) if you want to know more about SeaTunnel With Spark.
- SeaTunnel have a builtin engine named `Zeta`, and it's the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
diff --git a/docs/en/introduction/about.md b/docs/en/introduction/about.md
index 50a24d71bf..dbf45f902d 100644
--- a/docs/en/introduction/about.md
+++ b/docs/en/introduction/about.md
@@ -47,11 +47,11 @@ The default engine use by SeaTunnel is [SeaTunnel Engine](../engines/zeta/about.
## Connector
-- **Source Connectors** SeaTunnel supports reading data from various relational, graph, NoSQL, document, and memory databases; distributed file systems such as HDFS; and a variety of cloud storage solutions, such as S3 and OSS. We also support data reading of many common SaaS services. You can access the detailed list [Here](connector-v2/source). If you want, You can develop your own source connector and easily integrate it into SeaTunnel.
+- **Source Connectors** SeaTunnel supports reading data from various relational, graph, NoSQL, document, and memory databases; distributed file systems such as HDFS; and a variety of cloud storage solutions, such as S3 and OSS. We also support data reading of many common SaaS services. You can access the detailed list [Here](../connectors/source). If you want, You can develop your own source connector and easily integrate it into SeaTunnel.
- **Transform Connector** If the schema is different between source and Sink, You can use the Transform Connector to change the schema read from source and make it the same as the Sink schema.
-- **Sink Connector** SeaTunnel supports writing data to various relational, graph, NoSQL, document, and memory databases; distributed file systems such as HDFS; and a variety of cloud storage solutions, such as S3 and OSS. We also support writing data to many common SaaS services. You can access the detailed list [Here](connector-v2/sink). If you want, you can develop your own Sink connector and easily integrate it into SeaTunnel.
+- **Sink Connector** SeaTunnel supports writing data to various relational, graph, NoSQL, document, and memory databases; distributed file systems such as HDFS; and a variety of cloud storage solutions, such as S3 and OSS. We also support writing data to many common SaaS services. You can access the detailed list [Here](../connectors/sink). If you want, you can develop your own Sink connector and easily integrate it into SeaTunnel.
## Who Uses SeaTunnel
@@ -68,4 +68,4 @@ SeaTunnel enriches the <a href="https://landscape.cncf.io/?item=app-definition-a
## Learn more
-You can see [Quick Start](start-v2/locally/deployment.md) for the next steps.
+You can see [Quick Start](../getting-started/locally/deployment.md) for the next steps.
diff --git a/docs/en/introduction/concepts/config.md b/docs/en/introduction/concepts/config.md
index 4fc2942785..531ee296fc 100644
--- a/docs/en/introduction/concepts/config.md
+++ b/docs/en/introduction/concepts/config.md
@@ -84,7 +84,7 @@ For flink and spark engine, the specific configuration rules of their parameters
Source is used to define where SeaTunnel needs to fetch data, and use the fetched data for the next step.
Multiple sources can be defined at the same time. The supported source can be found
-in [Source of SeaTunnel](../connector-v2/source). Each source has its own specific parameters to define how to
+in [Source of SeaTunnel](../connectors/source). Each source has its own specific parameters to define how to
fetch data, and SeaTunnel also extracts the parameters that each source will use, such as the `plugin_output` parameter, which is used to specify the name of the data generated by the current source, which is convenient for follow-up used by other modules.
@@ -135,7 +135,7 @@ in [Transform V2 of SeaTunnel](../transform-v2)
Our purpose with SeaTunnel is to synchronize data from one place to another, so it is critical to define how
and where data is written. With the sink module provided by SeaTunnel, you can complete this operation quickly
and efficiently. Sink and source are very similar, but the difference is reading and writing. So please check out
-[Supported Sinks](../connector-v2/sink).
+[Supported Sinks](../connectors/sink).
### Other Information
@@ -337,6 +337,6 @@ sink {
## What's More
-- Start write your own config file now, choose the [connector](../connector-v2/source) you want to use, and configure the parameters according to the connector's documentation.
+- Start writing your own config file now, choose the [connector](../connectors/source) you want to use, and configure the parameters according to the connector's documentation.
- If you want to know the details of the format configuration, please see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
diff --git a/docs/en/introduction/configuration/JobEnvConfig.md b/docs/en/introduction/configuration/JobEnvConfig.md
index e21864dceb..67064c5ceb 100644
--- a/docs/en/introduction/configuration/JobEnvConfig.md
+++ b/docs/en/introduction/configuration/JobEnvConfig.md
@@ -37,7 +37,7 @@ This parameter configures the parallelism of source and sink.
Specify the method of encryption, if you didn't have the requirement for encrypting or decrypting config files, this option can be ignored.
-For more details, you can refer to the documentation [Config Encryption Decryption](../connector-v2/Config-Encryption-Decryption.md)
+For more details, you can refer to the documentation [Config Encryption Decryption](../concepts/config-encryption-decryption.md)
## Zeta Engine Parameter
diff --git a/docs/en/introduction/configuration/schema-evolution.md b/docs/en/introduction/configuration/schema-evolution.md
index 3eb522cb46..c2bcad2fe3 100644
--- a/docs/en/introduction/configuration/schema-evolution.md
+++ b/docs/en/introduction/configuration/schema-evolution.md
@@ -15,19 +15,19 @@ Schema Evolution means that the schema of a data table can be changed and the da
## Supported connectors
### Source
-[Mysql-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/MySQL-CDC.md)
-[Oracle-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/Oracle-CDC.md)
+[Mysql-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/MySQL-CDC.md)
+[Oracle-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/Oracle-CDC.md)
### Sink
-[Jdbc-Mysql](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[Jdbc-Oracle](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[Jdbc-Postgres](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[Jdbc-Dameng](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[Jdbc-SqlServer](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[StarRocks](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/StarRocks.md)
-[Doris](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Doris.md)
-[Paimon](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Paimon.md#Schema-Evolution)
-[Elasticsearch](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Elasticsearch.md#Schema-Evolution)
+[Jdbc-Mysql](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[Jdbc-Oracle](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[Jdbc-Postgres](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[Jdbc-Dameng](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[Jdbc-SqlServer](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[StarRocks](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/StarRocks.md)
+[Doris](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Doris.md)
+[Paimon](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Paimon.md#Schema-Evolution)
+[Elasticsearch](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Elasticsearch.md#Schema-Evolution)
Note:
* Schema evolution does not support transforms at the moment. Schema evolution across different database types (Oracle-CDC -> Jdbc-Mysql) currently does not support column default values in DDL.
diff --git a/docs/en/introduction/configuration/sql-config.md b/docs/en/introduction/configuration/sql-config.md
index 97d756cb2e..32011d24e6 100644
--- a/docs/en/introduction/configuration/sql-config.md
+++ b/docs/en/introduction/configuration/sql-config.md
@@ -178,7 +178,7 @@ CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
```
* This syntax creates a temporary table with the result of a `SELECT` query, used for `INSERT INTO` operations.
-* The syntax of the `SELECT` part refers to: [SQL Transform](../transform-v2/sql.md) `query` configuration item
+* The syntax of the `SELECT` part refers to: [SQL Transform](../../transforms/sql.md) `query` configuration item
```sql
CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
diff --git a/docs/en/transforms/llm.md b/docs/en/transforms/llm.md
index 8ebc13d1cf..7bc314a5b2 100644
--- a/docs/en/transforms/llm.md
+++ b/docs/en/transforms/llm.md
@@ -164,7 +164,7 @@ Transform plugin common parameters, please refer to [Transform Plugin](common-op
## tips
The API interface usually has a rate limit, which can be configured with Seatunnel's speed limit to ensure smooth operation of the task.
-For details about Seatunnel speed limit Settings, please refer to [speed-limit](../concept/speed-limit.md) for details.
+For details about SeaTunnel speed limit settings, please refer to [speed-limit](../introduction/concepts/speed-limit.md).
## Example OPENAI
diff --git a/docs/zh/connectors/sink/Assert.md b/docs/zh/connectors/sink/Assert.md
index 5838e5ca8d..10bb4a179e 100644
--- a/docs/zh/connectors/sink/Assert.md
+++ b/docs/zh/connectors/sink/Assert.md
@@ -90,7 +90,7 @@ Assert 数据接收器是一个用于断言数据是否符合用户定义规则
`equals_to`用于比较字段值是否等于配置的预期值。用户可以将所有类型的值分配给`equals_to`。这些类型在[这里](../../introduction/concepts/schema-feature.md#目前支持哪些类型)有详细说明。
例如,如果一个字段是一个包含三个字段的行,行类型的声明是`{a = array<string>, b = map<string, decimal(30, 2)>, c={c_0 = int, b = string}}`,用户可以将值`[["a", "b"], { k0 = 9999.99, k1 = 111.11 }, [123, "abcd"]]`分配给`equals_to`。
-> 定义字段值的方式与[FakeSource](../../connector-v2/source/FakeSource.md#自定义数据内容简单示例)一致。
+> 定义字段值的方式与[FakeSource](../source/FakeSource.md#自定义数据内容简单示例)一致。
>
> `equals_to`不能应用于`null`类型字段。但是,用户可以使用规则类型`NULL`进行验证,例如`{rule_type = NULL}`。
diff --git a/docs/zh/connectors/sink/DB2.md b/docs/zh/connectors/sink/DB2.md
index dd1554f7e5..e1e1e35b5d 100644
--- a/docs/zh/connectors/sink/DB2.md
+++ b/docs/zh/connectors/sink/DB2.md
@@ -112,12 +112,12 @@ source {
}
}
# 如果你想了解更多关于如何配置seatunnel的信息,并查看完整的源插件列表,
- # 请前往 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请前往 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果你想了解更多关于如何配置seatunnel的信息,并查看转换插件的完整列表
- # 请前往 https://seatunnel.apache.org/docs/transform-v2
+ # 请前往 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -129,7 +129,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
# 如果你想了解更多关于如何配置seatunnel的信息,并查看完整的接收插件列表,
- # 请前往 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请前往 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/DuckDB.md b/docs/zh/connectors/sink/DuckDB.md
index a384b76c50..c934a3d502 100644
--- a/docs/zh/connectors/sink/DuckDB.md
+++ b/docs/zh/connectors/sink/DuckDB.md
@@ -30,8 +30,8 @@ import ChangeLog from '../changelog/connector-jdbc.md';
## 主要功能
-- [x] [精确一次](../../concept/connector-v2-features.md)
-- [x] [CDC](../../concept/connector-v2-features.md)
+- [x] [精确一次](../../introduction/concepts/connector-v2-features.md)
+- [x] [CDC](../../introduction/concepts/connector-v2-features.md)
> 使用 `Xa 事务` 来确保 `精确一次`。因此只支持支持 `Xa 事务` 的数据库的 `精确一次`。您可以设置
> `is_exactly_once=true` 来启用它。
diff --git a/docs/zh/connectors/sink/GraphQL.md b/docs/zh/connectors/sink/GraphQL.md
index 45210e7157..7c19a4a36c 100644
--- a/docs/zh/connectors/sink/GraphQL.md
+++ b/docs/zh/connectors/sink/GraphQL.md
@@ -12,9 +12,9 @@ import ChangeLog from '../changelog/connector-graphql.md';
## 主要特性
-- [ ] [[精确一次]](../../concept/connector-v2-features.md)
-- [ ] [变更数据捕获](../../concept/connector-v2-features.md)
-- [x] [支持多表写入](../../concept/connector-v2-features.md)
+- [ ] [[精确一次]](../../introduction/concepts/connector-v2-features.md)
+- [ ] [变更数据捕获](../../introduction/concepts/connector-v2-features.md)
+- [x] [支持多表写入](../../introduction/concepts/connector-v2-features.md)
## 描述
diff --git a/docs/zh/connectors/sink/HdfsFile.md b/docs/zh/connectors/sink/HdfsFile.md
index 9f83f6d449..1f98ea7099 100644
--- a/docs/zh/connectors/sink/HdfsFile.md
+++ b/docs/zh/connectors/sink/HdfsFile.md
@@ -149,12 +149,12 @@ source {
}
}
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的源端插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -164,7 +164,7 @@ sink {
file_format_type = "orc"
}
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的接收器插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/Kingbase.md b/docs/zh/connectors/sink/Kingbase.md
index 062320a2fd..1ae8eae8e5 100644
--- a/docs/zh/connectors/sink/Kingbase.md
+++ b/docs/zh/connectors/sink/Kingbase.md
@@ -118,12 +118,12 @@ source {
}
}
# 如果您想了解更多关于如何配置 seatunnel 和查看源插件的完整列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果您想了解更多关于如何配置 seatunnel 和查看转换插件的完整列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -135,7 +135,7 @@ sink {
query = "insert into test_table(c_string,c_boolean,c_tinyint,c_smallint,c_int,c_bigint,c_float,c_double,c_decimal,c_date,c_time,c_timestamp) values(?,?,?,?,?,?,?,?,?,?,?,?)"
}
# 如果您想了解更多关于如何配置 seatunnel 和查看 sink 插件的完整列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/Mysql.md b/docs/zh/connectors/sink/Mysql.md
index c3147a7799..efc80df676 100644
--- a/docs/zh/connectors/sink/Mysql.md
+++ b/docs/zh/connectors/sink/Mysql.md
@@ -124,12 +124,12 @@ source {
}
}
#如果你想了解更多关于如何配置seatunnel的信息,并查看完整的源插件列表,
- #请前往https://seatunnel.apache.org/docs/connector-v2/source
+ #请前往https://seatunnel.apache.org/docs/connectors/source
}
transform {
#如果你想了解更多关于如何配置seatunnel的信息,并查看转换插件的完整列表,
- #请前往https://seatunnel.apache.org/docs/transform-v2
+ #请前往https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -141,7 +141,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
#如果你想了解更多关于如何配置seatunnel的信息,并查看完整的sink插件列表,
- #请前往https://seatunnel.apache.org/docs/connector-v2/sink
+ #请前往https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/OceanBase.md b/docs/zh/connectors/sink/OceanBase.md
index 6001d14f1f..54bc2635a9 100644
--- a/docs/zh/connectors/sink/OceanBase.md
+++ b/docs/zh/connectors/sink/OceanBase.md
@@ -122,12 +122,12 @@ source {
}
}
# 如果你想了解更多关于如何配置seatunnel的信息,并查看完整的source插件列表,
- # 请前往https://seatunnel.apache.org/docs/connector-v2/source
+ # 请前往https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果你想了解更多关于如何配置seatunnel的信息,并查看transform插件的完整列表,
- # 请前往https://seatunnel.apache.org/docs/transform-v2
+ # 请前往https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -140,7 +140,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
# 如果你想了解更多关于如何配置seatunnel的信息,并查看完整的sink插件列表,
- # 请前往https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请前往https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/Oracle.md b/docs/zh/connectors/sink/Oracle.md
index d26114db0e..50e6258d5e 100644
--- a/docs/zh/connectors/sink/Oracle.md
+++ b/docs/zh/connectors/sink/Oracle.md
@@ -121,12 +121,12 @@ source {
}
}
#如果你想了解更多关于如何配置seatunnel的信息,并查看完整的源插件列表,
- #请前往https://seatunnel.apache.org/docs/connector-v2/source
+ #请前往https://seatunnel.apache.org/docs/connectors/source
}
transform {
#如果你想了解更多关于如何配置seatunnel的信息,并查看转换插件的完整列表,
- #请前往https://seatunnel.apache.org/docs/transform-v2
+ #请前往https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -138,7 +138,7 @@ sink {
query = "INSERT INTO TEST.TEST_TABLE(NAME,AGE) VALUES(?,?)"
}
#如果你想了解更多关于如何配置seatunnel的信息,并查看完整的sink插件列表,
- #请前往https://seatunnel.apache.org/docs/connector-v2/sink
+ #请前往https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/Paimon.md b/docs/zh/connectors/sink/Paimon.md
index 2d0b36f1b6..0d98554cd7 100644
--- a/docs/zh/connectors/sink/Paimon.md
+++ b/docs/zh/connectors/sink/Paimon.md
@@ -96,7 +96,7 @@ Paimon表的changelog产生模式有[四种](https://paimon.apache.org/docs/mast
* [`lookup`](https://paimon.apache.org/docs/master/primary-key-table/changelog-producer/#lookup)
* [`full-compaction`](https://paimon.apache.org/docs/master/primary-key-table/changelog-producer/#full-compaction)
> 注意:
-> 当你使用流模式去读paimon表的数据时,不同模式将会产生[不同的结果](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/Paimon.md#changelog)。
+> 当你使用流模式去读paimon表的数据时,不同模式将会产生[不同的结果](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/Paimon.md#changelog)。
## 文件系统
Paimon连接器支持向多文件系统写入数据。目前支持的文件系统有hdfs和s3。
diff --git a/docs/zh/connectors/sink/PostgreSql.md b/docs/zh/connectors/sink/PostgreSql.md
index f24ac91d6c..885721eef3 100644
--- a/docs/zh/connectors/sink/PostgreSql.md
+++ b/docs/zh/connectors/sink/PostgreSql.md
@@ -160,12 +160,12 @@ source {
}
}
# If you would like to get more information about how to configure seatunnel and see full list of source plugins,
- # please go to https://seatunnel.apache.org/docs/connector-v2/source
+ # please go to https://seatunnel.apache.org/docs/connectors/source
}
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2
+ # please go to https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -178,7 +178,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
# If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
- # please go to https://seatunnel.apache.org/docs/connector-v2/sink
+ # please go to https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/S3File.md b/docs/zh/connectors/sink/S3File.md
index bec3cf2aef..0cd641633c 100644
--- a/docs/zh/connectors/sink/S3File.md
+++ b/docs/zh/connectors/sink/S3File.md
@@ -367,13 +367,13 @@ source {
}
}
# 如果您想了解更多关于如何配置SeaTunnel以及查看完整的源插件列表,
-# 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+# 请访问 https://seatunnel.apache.org/docs/connectors/source
source {
}
transform {
# 如果您想了解更多关于如何配置SeaTunnel以及查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -401,7 +401,7 @@ sink {
}
}
# 如果您想了解更多关于如何配置SeaTunnel以及查看完整的接收插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/Sls.md b/docs/zh/connectors/sink/Sls.md
index 59d4c98231..6b104a12b3 100644
--- a/docs/zh/connectors/sink/Sls.md
+++ b/docs/zh/connectors/sink/Sls.md
@@ -44,7 +44,7 @@ Sink connector for Aliyun Sls.
### 简单示例
-> 此示例写入sls的logstore1的数据。如果您尚未安装和部署SeaTunnel,则需要按照安装SeaTunnel中的说明安装和部署SeaTunnel。然后按照[快速启动SeaTunnel引擎](../../Start-v2/locale/Quick-Start SeaTunnel Engine.md)中的说明运行此作业。
+> 此示例写入sls的logstore1的数据。如果您尚未安装和部署SeaTunnel,则需要按照安装SeaTunnel中的说明安装和部署SeaTunnel。然后按照[快速启动SeaTunnel引擎](../../getting-started/locally/quick-start-seatunnel-engine.md)中的说明运行此作业。
[创建RAM用户及授权](https://help.aliyun.com/zh/sls/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service?spm=a2c4g.11186623.0.i4),
请确认RAM用户有足够的权限来读取及管理数据,参考:[RAM自定义授权示例](https://help.aliyun.com/zh/sls/use-custom-policies-to-grant-permissions-to-a-ram-user?spm=a2c4g.11186623.0.0.4a6e4e554CKhSc#reference-s3z-m1l-z2b)
diff --git a/docs/zh/connectors/sink/Snowflake.md b/docs/zh/connectors/sink/Snowflake.md
index 947ba906a2..05fd64134c 100644
--- a/docs/zh/connectors/sink/Snowflake.md
+++ b/docs/zh/connectors/sink/Snowflake.md
@@ -100,12 +100,12 @@ source {
}
}
# 如果您想了解更多关于如何配置SeaTunnel的信息,并查看完整的源插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果您想了解更多关于如何配置SeaTunnel的信息,并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
jdbc {
@@ -116,7 +116,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
# 如果您想了解更多关于如何配置SeaTunnel的信息,并查看完整的接收器插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/sink/SqlServer.md b/docs/zh/connectors/sink/SqlServer.md
index 68b2fe6b6f..6e65d6002f 100644
--- a/docs/zh/connectors/sink/SqlServer.md
+++ b/docs/zh/connectors/sink/SqlServer.md
@@ -117,12 +117,12 @@ source {
partition_num = 10
}
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的源插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source/Jdbc
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source/Jdbc
}
transform {
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
@@ -134,7 +134,7 @@ sink {
query = "insert into full_types_jdbc_sink( id, val_char, val_varchar, val_text, val_nchar, val_nvarchar, val_ntext, val_decimal, val_numeric, val_float, val_real, val_smallmoney, val_money, val_bit, val_tinyint, val_smallint, val_int, val_bigint, val_date, val_time, val_datetime2, val_datetime, val_smalldatetime ) values( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )"
}
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的接收器插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink/Jdbc
}
```
@@ -173,7 +173,7 @@ Jdbc {
}
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的接收器插件列表,
-# 请访问 https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc
+# 请访问 https://seatunnel.apache.org/docs/connectors/sink/Jdbc
```
## 变更日志
diff --git a/docs/zh/connectors/sink/Vertica.md b/docs/zh/connectors/sink/Vertica.md
index 8322b3a4b4..91ea229bf5 100644
--- a/docs/zh/connectors/sink/Vertica.md
+++ b/docs/zh/connectors/sink/Vertica.md
@@ -118,12 +118,12 @@ source {
}
}
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的源插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -135,7 +135,7 @@ sink {
query = "insert into test_table(name,age) values(?,?)"
}
# 如果想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的接收器插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/source/AmazonSqs.md b/docs/zh/connectors/source/AmazonSqs.md
index f2b08d19f7..2cf1ba0b9c 100644
--- a/docs/zh/connectors/source/AmazonSqs.md
+++ b/docs/zh/connectors/source/AmazonSqs.md
@@ -69,7 +69,7 @@ source {
transform {
# 如果你想了解更多关于如何配置seatunnel的信息,并查看转换插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请前往 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/DB2.md b/docs/zh/connectors/source/DB2.md
index 915ffe18d3..091133dc17 100644
--- a/docs/zh/connectors/source/DB2.md
+++ b/docs/zh/connectors/source/DB2.md
@@ -111,7 +111,7 @@ source{
transform {
# 如果你想了解更多关于如何配置seatunnel的信息,并查看transform插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请前往 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/DuckDB.md b/docs/zh/connectors/source/DuckDB.md
index ccf94cb8d8..ddaa6593e1 100644
--- a/docs/zh/connectors/source/DuckDB.md
+++ b/docs/zh/connectors/source/DuckDB.md
@@ -30,12 +30,12 @@ import ChangeLog from '../changelog/connector-jdbc.md';
## 主要功能
-- [x] [批处理](../../concept/connector-v2-features.md)
-- [ ] [流处理](../../concept/connector-v2-features.md)
-- [x] [精确一次](../../concept/connector-v2-features.md)
-- [x] [列投影](../../concept/connector-v2-features.md)
-- [x] [并行度](../../concept/connector-v2-features.md)
-- [x] [支持用户定义的拆分](../../concept/connector-v2-features.md)
+- [x] [批处理](../../introduction/concepts/connector-v2-features.md)
+- [ ] [流处理](../../introduction/concepts/connector-v2-features.md)
+- [x] [精确一次](../../introduction/concepts/connector-v2-features.md)
+- [x] [列投影](../../introduction/concepts/connector-v2-features.md)
+- [x] [并行度](../../introduction/concepts/connector-v2-features.md)
+- [x] [支持用户定义的拆分](../../introduction/concepts/connector-v2-features.md)
> 支持 SQL 查询,并能实现列投影效果
@@ -157,7 +157,7 @@ source{
transform {
# 如果您想了解更多关于如何配置 seatunnel 和查看转换插件的完整列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/GraphQL.md b/docs/zh/connectors/source/GraphQL.md
index c50a3ef06b..20bdb0ec3a 100644
--- a/docs/zh/connectors/source/GraphQL.md
+++ b/docs/zh/connectors/source/GraphQL.md
@@ -10,9 +10,9 @@ import ChangeLog from '../changelog/connector-graphql.md';
## 主要特性
-- [x] [批处理](../../concept/connector-v2-features.md)
-- [x] [流处理](../../concept/connector-v2-features.md)
-- [ ] [并行](../../concept/connector-v2-features.md)
+- [x] [批处理](../../introduction/concepts/connector-v2-features.md)
+- [x] [流处理](../../introduction/concepts/connector-v2-features.md)
+- [ ] [并行](../../introduction/concepts/connector-v2-features.md)
## 源选项
diff --git a/docs/zh/connectors/source/HdfsFile.md b/docs/zh/connectors/source/HdfsFile.md
index e1c67dfdbd..922e5bbbb7 100644
--- a/docs/zh/connectors/source/HdfsFile.md
+++ b/docs/zh/connectors/source/HdfsFile.md
@@ -12,20 +12,20 @@ import ChangeLog from '../changelog/connector-file-hadoop.md';
## 主要特性
-- [x] [多模态](../../concept/connector-v2-features.md#多模态multimodal)
+- [x] [多模态](../../introduction/concepts/connector-v2-features.md#多模态multimodal)
使用二进制文件格式读取和写入任何格式的文件,例如视频、图片等。简而言之,任何文件都可以同步到目标位置。
-- [x] [批处理](../../concept/connector-v2-features.md)
-- [ ] [流处理](../../concept/connector-v2-features.md)
-- [x] [精确一次](../../concept/connector-v2-features.md)
+- [x] [批处理](../../introduction/concepts/connector-v2-features.md)
+- [ ] [流处理](../../introduction/concepts/connector-v2-features.md)
+- [x] [精确一次](../../introduction/concepts/connector-v2-features.md)
在 pollNext 调用中读取分片中的所有数据。读取的分片将保存在快照中。
-- [x] [列投影](../../concept/connector-v2-features.md)
-- [x] [并行度](../../concept/connector-v2-features.md)
-- [ ] [支持用户定义分片](../../concept/connector-v2-features.md)
-- [x] [支持多表读](../../concept/connector-v2-features.md)
+- [x] [列投影](../../introduction/concepts/connector-v2-features.md)
+- [x] [并行度](../../introduction/concepts/connector-v2-features.md)
+- [ ] [支持用户定义分片](../../introduction/concepts/connector-v2-features.md)
+- [x] [支持多表读](../../introduction/concepts/connector-v2-features.md)
- [x] 文件格式类型
- [x] text
- [x] csv
@@ -338,12 +338,12 @@ source {
fs.defaultFS = "hdfs://namenode001"
}
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的数据源插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/source
+ # 请访问 https://seatunnel.apache.org/docs/connectors/source
}
transform {
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -353,7 +353,7 @@ sink {
file_format_type = "orc"
}
# 如果您想获取有关如何配置 seatunnel 的更多信息和查看完整的接收器插件列表,
- # 请访问 https://seatunnel.apache.org/docs/connector-v2/sink
+ # 请访问 https://seatunnel.apache.org/docs/connectors/sink
}
```
diff --git a/docs/zh/connectors/source/HiveJdbc.md b/docs/zh/connectors/source/HiveJdbc.md
index 48dea61b54..a561ee1a04 100644
--- a/docs/zh/connectors/source/HiveJdbc.md
+++ b/docs/zh/connectors/source/HiveJdbc.md
@@ -113,7 +113,7 @@ source{
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/Kafka.md b/docs/zh/connectors/source/Kafka.md
index 058dc4d66f..5923a85c87 100644
--- a/docs/zh/connectors/source/Kafka.md
+++ b/docs/zh/connectors/source/Kafka.md
@@ -78,7 +78,7 @@ debezium_record_table_filter {
## 元数据支持
-Kafka 源会在 `ConsumerRecord.timestamp` 大于等于 0 时,将其自动写入 SeaTunnel 行的 `EventTime` 元数据。可以借助 [Metadata 转换](../../transform-v2/metadata.md) 把这段时间戳暴露为普通字段,方便做分区或下游 SQL 处理。
+Kafka 源会在 `ConsumerRecord.timestamp` 大于等于 0 时,将其自动写入 SeaTunnel 行的 `EventTime` 元数据。可以借助 [Metadata 转换](../../transforms/metadata.md) 把这段时间戳暴露为普通字段,方便做分区或下游 SQL 处理。
```hocon
source {
diff --git a/docs/zh/connectors/source/Mysql.md b/docs/zh/connectors/source/Mysql.md
index 69b80986ed..d5a683715b 100644
--- a/docs/zh/connectors/source/Mysql.md
+++ b/docs/zh/connectors/source/Mysql.md
@@ -204,7 +204,7 @@ source{
transform {
# 如果您想了解更多关于如何配置 SeaTunnel 的信息,并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/PostgreSQL.md b/docs/zh/connectors/source/PostgreSQL.md
index c99f20b6e6..3cba8564f4 100644
--- a/docs/zh/connectors/source/PostgreSQL.md
+++ b/docs/zh/connectors/source/PostgreSQL.md
@@ -188,7 +188,7 @@ source{
}
transform {
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/connectors/source/S3File.md b/docs/zh/connectors/source/S3File.md
index 75de5ef656..8ad518efad 100644
--- a/docs/zh/connectors/source/S3File.md
+++ b/docs/zh/connectors/source/S3File.md
@@ -431,7 +431,7 @@ source {
transform {
# 如果您想获取有关如何配置seatunnel和查看转换插件完整列表的更多信息,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
@@ -493,7 +493,7 @@ source {
transform {
# 如果您想获取有关如何配置seatunnel和查看转换插件完整列表的更多信息,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2
+ # 请访问 https://seatunnel.apache.org/docs/transforms
}
sink {
diff --git a/docs/zh/connectors/source/Sls.md b/docs/zh/connectors/source/Sls.md
index 29a61e26c8..777e976479 100644
--- a/docs/zh/connectors/source/Sls.md
+++ b/docs/zh/connectors/source/Sls.md
@@ -51,7 +51,7 @@ import ChangeLog from '../changelog/connector-sls.md';
### 简单示例
-> 此示例读取sls的logstore1的数据并将其打印到客户端。如果您尚未安装和部署SeaTunnel,则需要按照安装SeaTunnel中的说明安装和部署SeaTunnel。然后按照[快速启动SeaTunnel引擎](../../Start-v2/locale/Quick-Start SeaTunnel Engine.md)中的说明运行此作业。
+> 此示例读取sls的logstore1的数据并将其打印到客户端。如果您尚未安装和部署SeaTunnel,则需要按照安装SeaTunnel中的说明安装和部署SeaTunnel。然后按照[快速启动SeaTunnel引擎](../../getting-started/locally/quick-start-seatunnel-engine.md)中的说明运行此作业。
[创建RAM用户及授权](https://help.aliyun.com/zh/sls/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service?spm=a2c4g.11186623.0.i4),
请确认RAM用户有足够的权限来读取及管理数据,参考:[RAM自定义授权示例](https://help.aliyun.com/zh/sls/use-custom-policies-to-grant-permissions-to-a-ram-user?spm=a2c4g.11186623.0.0.4a6e4e554CKhSc#reference-s3z-m1l-z2b)
diff --git a/docs/zh/connectors/source/Snowflake.md b/docs/zh/connectors/source/Snowflake.md
index 324bc2064c..6d36db5065 100644
--- a/docs/zh/connectors/source/Snowflake.md
+++ b/docs/zh/connectors/source/Snowflake.md
@@ -104,7 +104,7 @@ import ChangeLog from '../changelog/connector-jdbc.md';
}
transform {
# 如果您想了解有关如何配置 seatunnel 的更多信息并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
Console {}
diff --git a/docs/zh/connectors/source/SqlServer.md b/docs/zh/connectors/source/SqlServer.md
index bb35541326..ce82fb10e2 100644
--- a/docs/zh/connectors/source/SqlServer.md
+++ b/docs/zh/connectors/source/SqlServer.md
@@ -182,7 +182,7 @@ source{
transform {
# 如果你想了解更多关于如何配置 seatunnel 的信息,并查看转换插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请前往 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
@@ -217,7 +217,7 @@ source {
transform {
# If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
- # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+ # please go to https://seatunnel.apache.org/docs/transforms/sql
}
sink {
@@ -251,19 +251,19 @@ source {
}
# 如果你想了解更多关于如何配置 seatunnel 的信息,并查看源插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/connector-v2/source/Jdbc
+ # 请前往 https://seatunnel.apache.org/docs/connectors/source/Jdbc
}
transform {
# 如果你想了解更多关于如何配置 seatunnel 的信息,并查看转换插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请前往 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
Console {}
- # 如果你想了解更多关于如何配置 seatunnel 的信息,并查看 Sink 插件的完整列表,
- # 请前往 https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc
+ # 如果你想了解更多关于如何配置 seatunnel 的信息,并查看汇插件的完整列表,
+ # 请前往 https://seatunnel.apache.org/docs/connectors/sink/Jdbc
}
```
diff --git a/docs/zh/connectors/source/Vertica.md b/docs/zh/connectors/source/Vertica.md
index 824c86c987..85e2601c7c 100644
--- a/docs/zh/connectors/source/Vertica.md
+++ b/docs/zh/connectors/source/Vertica.md
@@ -108,7 +108,7 @@ source{
transform {
# 如果您想了解有关如何配置 seatunnel 的更多信息并查看完整的转换插件列表,
- # 请访问 https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问 https://seatunnel.apache.org/docs/transforms/sql
}
sink {
diff --git a/docs/zh/engines/flink.md b/docs/zh/engines/flink.md
index 06f51a82b4..723dcfa7a5 100644
--- a/docs/zh/engines/flink.md
+++ b/docs/zh/engines/flink.md
@@ -70,7 +70,7 @@ source {
transform {
# 如果你想知道更多关于如何配置seatunnel的信息和查看完整的transform插件,
- # 请访问:https://seatunnel.apache.org/docs/transform-v2/sql
+ # 请访问:https://seatunnel.apache.org/docs/transforms/sql
}
sink{
diff --git a/docs/zh/engines/zeta/about.md b/docs/zh/engines/zeta/about.md
index 09f836dc41..be06ee7c89 100644
--- a/docs/zh/engines/zeta/about.md
+++ b/docs/zh/engines/zeta/about.md
@@ -36,7 +36,7 @@ SeaTunnel Engine 的整体设计遵循以下路径:
### 快速开始
-https://seatunnel.apache.org/docs/start-v2/locally/quick-start-seatunnel-engine
+https://seatunnel.apache.org/docs/getting-started/locally/quick-start-seatunnel-engine
### 下载安装
diff --git a/docs/zh/engines/zeta/rest-api-v1.md b/docs/zh/engines/zeta/rest-api-v1.md
index 5f43e7854d..5a08e80c49 100644
--- a/docs/zh/engines/zeta/rest-api-v1.md
+++ b/docs/zh/engines/zeta/rest-api-v1.md
@@ -676,7 +676,7 @@ network:
<details>
<summary><code>POST</code> <code><b>/hazelcast/rest/maps/encrypt-config</b></code> <code>(如果配置加密成功,则返回加密后的配置。)</code></summary>
-有关自定义加密的更多信息,请参阅文档[配置-加密-解密](../connector-v2/Config-Encryption-Decryption.md).
+有关自定义加密的更多信息,请参阅文档[配置-加密-解密](../../introduction/concepts/config-encryption-decryption.md).
#### 请求体
diff --git a/docs/zh/engines/zeta/rest-api-v2.md b/docs/zh/engines/zeta/rest-api-v2.md
index 2db79129c1..f840be1c3b 100644
--- a/docs/zh/engines/zeta/rest-api-v2.md
+++ b/docs/zh/engines/zeta/rest-api-v2.md
@@ -921,7 +921,7 @@ curl --location 'http://127.0.0.1:8080/submit-job/upload' --form 'config_file=@"
<details>
<summary><code>POST</code> <code><b>/encrypt-config</b></code> <code>(如果配置加密成功,则返回加密后的配置。)</code></summary>
-有关自定义加密的更多信息,请参阅文档[配置-加密-解密](../connector-v2/Config-Encryption-Decryption.md).
+有关自定义加密的更多信息,请参阅文档[配置-加密-解密](../../introduction/concepts/config-encryption-decryption.md).
#### 请求体
diff --git a/docs/zh/faq.md b/docs/zh/faq.md
index 9850ede675..b39b543db9 100644
--- a/docs/zh/faq.md
+++ b/docs/zh/faq.md
@@ -2,8 +2,8 @@
## SeaTunnel 支持哪些数据来源和数据目的地?
SeaTunnel 支持多种数据源来源和数据目的地,您可以在官网找到详细的列表:
-SeaTunnel 支持的数据来源(Source)列表:https://seatunnel.apache.org/docs/connector-v2/source
-SeaTunnel 支持的数据目的地(Sink)列表:https://seatunnel.apache.org/docs/connector-v2/sink
+SeaTunnel 支持的数据来源(Source)列表:https://seatunnel.apache.org/docs/connectors/source
+SeaTunnel 支持的数据目的地(Sink)列表:https://seatunnel.apache.org/docs/connectors/sink
## SeaTunnel 是否支持批处理和流处理?
SeaTunnel 支持批流一体,SeaTunnel 可以设置批处理和流处理两种模式。您可以根据具体的业务场景和需求选择合适的处理模式。批处理适合定时数据同步场景,而流处理适合实时同步和数据变更捕获 (CDC) 场景。
@@ -13,7 +13,7 @@ Spark 和 Flink 不是必需的,SeaTunnel 可以支持 Zeta、Spark 和 Flink
社区对 Zeta 的支持力度是最大的,功能也更丰富。
## SeaTunnel 支持的数据转换功能有哪些?
-SeaTunnel 支持多种数据转换功能,包括字段映射、数据过滤、数据格式转换等。可以通过在配置文件中定义 `transform` 模块来实现数据转换。详情请参考 SeaTunnel [Transform 文档](https://seatunnel.apache.org/docs/transform-v2)。
+SeaTunnel 支持多种数据转换功能,包括字段映射、数据过滤、数据格式转换等。可以通过在配置文件中定义 `transform` 模块来实现数据转换。详情请参考 SeaTunnel [Transform 文档](https://seatunnel.apache.org/docs/transforms)。
## SeaTunnel 是否可以自定义数据清洗规则?
SeaTunnel 支持自定义数据清洗规则。可以在 `transform` 模块中配置自定义规则,例如清理脏数据、删除无效记录或字段转换。
@@ -22,7 +22,7 @@ SeaTunnel 支持自定义数据清洗规则。可以在 `transform` 模块中配
SeaTunnel 支持增量数据同步。例如通过 CDC 连接器实现对数据库的增量同步,适用于需要实时捕获数据变更的场景。
## SeaTunnel 目前支持哪些数据源的 CDC ?
-目前支持 MongoDB CDC、MySQL CDC、Opengauss CDC、Oracle CDC、PostgreSQL CDC、Sql Server CDC、TiDB CDC等,更多请查阅[Source](https://seatunnel.apache.org/docs/connector-v2/source)。
+目前支持 MongoDB CDC、MySQL CDC、Opengauss CDC、Oracle CDC、PostgreSQL CDC、Sql Server CDC、TiDB CDC等,更多请查阅[Source](https://seatunnel.apache.org/docs/connectors/source)。
## SeaTunnel CDC 同步需要的权限如何开启?
这样就可以了。
@@ -43,7 +43,7 @@ SeaTunnel 支持增量数据同步。例如通过 CDC 连接器实现对数据
- **`CREATE_SCHEMA_WHEN_NOT_EXIST`**:当表不存在时会创建,若表已存在则跳过创建。
- **`ERROR_WHEN_SCHEMA_NOT_EXIST`**:当表不存在时会报错。
- **`IGNORE`**:忽略对表的处理。
- 目前很多 connector 已经支持了自动建表,请参考对应的 connector 文档,这里拿 Jdbc 举例,请参考 [Jdbc sink](https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc#schema_save_mode-enum)
+ 目前很多 connector 已经支持了自动建表,请参考对应的 connector 文档,这里拿 Jdbc 举例,请参考 [Jdbc sink](https://seatunnel.apache.org/docs/connectors/sink/Jdbc#schema_save_mode-enum)
## SeaTunnel 是否支持数据同步任务开始前对已有数据进行处理?
在同步任务启动之前,可以为目标端已有的数据选择不同的处理方案。是通过 `data_save_mode` 参数来控制的。
@@ -52,7 +52,7 @@ SeaTunnel 支持增量数据同步。例如通过 CDC 连接器实现对数据
- **`APPEND_DATA`**:保留数据库结构,保留数据。
- **`CUSTOM_PROCESSING`**:用户自定义处理。
- **`ERROR_WHEN_DATA_EXISTS`**:当存在数据时,报错。
- 目前很多 connector 已经支持了对已有数据进行处理,请参考对应的 connector 文档,这里拿 Jdbc 举例,请参考 [Jdbc sink](https://seatunnel.apache.org/docs/connector-v2/sink/Jdbc#data_save_mode-enum)
+ 目前很多 connector 已经支持了对已有数据进行处理,请参考对应的 connector 文档,这里拿 Jdbc 举例,请参考 [Jdbc sink](https://seatunnel.apache.org/docs/connectors/sink/Jdbc#data_save_mode-enum)
## SeaTunnel 是否支持精确一致性管理?
SeaTunnel 支持一部分数据源的精确一致性,例如支持 MySQL、PostgreSQL 等数据库的事务写入,确保数据在同步过程中的一致性,另外精确一致性也要看数据库本身是否可以支持
@@ -89,7 +89,7 @@ $SEATUNNEL_HOME/bin/seatunnel.sh \
-i date=20231110
```
-您可以使用参数“-i”或“--variable”后跟“key=value”来指定变量的值,其中key需要与配置中的变量名称相同。详情可以参考:https://seatunnel.apache.org/docs/concept/config
+您可以使用参数“-i”或“--variable”后跟“key=value”来指定变量的值,其中key需要与配置中的变量名称相同。详情可以参考:https://seatunnel.apache.org/docs/introduction/concepts/config
## 如何在配置文件中写入多行文本的配置项?
当配置的文本很长并且想要将其换行时,您可以使用三个双引号来指示其开始和结束:
diff --git a/docs/zh/getting-started/kubernetes/helm.md b/docs/zh/getting-started/kubernetes/helm.md
index 51bd87dce5..5ee1a4dbf7 100644
--- a/docs/zh/getting-started/kubernetes/helm.md
+++ b/docs/zh/getting-started/kubernetes/helm.md
@@ -76,5 +76,5 @@ curl http://127.0.0.1:5801/system-monitoring-information
后面就可以使用[rest-api-v2](../../engines/zeta/rest-api-v2.md)提交任务了。
## 下一步
-到现在为止,您已经安装好Seatunnel集群了,你可以查看Seatunnel有哪些[连接器](../../connector-v2).
+到现在为止,您已经安装好Seatunnel集群了,你可以查看Seatunnel有哪些[连接器](../../connectors).
或者选择其他方式 [部署](../../engines/zeta/deployment.md).
diff --git a/docs/zh/getting-started/locally/quick-start-flink.md b/docs/zh/getting-started/locally/quick-start-flink.md
index 69162338d4..a37f301d72 100644
--- a/docs/zh/getting-started/locally/quick-start-flink.md
+++ b/docs/zh/getting-started/locally/quick-start-flink.md
@@ -57,7 +57,7 @@ sink {
```
-关于配置的更多信息请查看[配置的基本概念](../../concept/config.md)
+关于配置的更多信息请查看[配置的基本概念](../../introduction/concepts/config.md)
## 步骤 4: 运行SeaTunnel应用程序
@@ -105,6 +105,6 @@ row=16 : SGZCr, 94186144
## 此外
- 开始编写您自己的配置文件,选择您想要使用的[连接器](../../connectors/source),并根据连接器的文档配置参数。
-- 如果您想要了解更多关于SeaTunnel运行在Flink上的信息,请参阅[基于Flink的SeaTunnel](../../other-engine/flink.md)。
+- 如果您想要了解更多关于SeaTunnel运行在Flink上的信息,请参阅[基于Flink的SeaTunnel](../../engines/flink.md)。
- SeaTunnel有内置的`Zeta`引擎,它是作为SeaTunnel的默认引擎。您可以参考[快速开始](quick-start-seatunnel-engine.md)配置和运行数据同步作业。
diff --git a/docs/zh/getting-started/locally/quick-start-seatunnel-engine.md b/docs/zh/getting-started/locally/quick-start-seatunnel-engine.md
index 9a24f11dbf..4cdec9d2d9 100644
--- a/docs/zh/getting-started/locally/quick-start-seatunnel-engine.md
+++ b/docs/zh/getting-started/locally/quick-start-seatunnel-engine.md
@@ -51,7 +51,7 @@ sink {
```
-关于配置的更多信息请查看[配置的基本概念](../../concept/config.md)
+关于配置的更多信息请查看[配置的基本概念](../../introduction/concepts/config.md)
## 步骤 3: 运行SeaTunnel应用程序
@@ -155,7 +155,7 @@ sink {
}
```
-关于配置的更多信息请查看[配置的基本概念](../../concept/config.md)
+关于配置的更多信息请查看[配置的基本概念](../../introduction/concepts/config.md)
### 步骤 4: 运行SeaTunnel应用程序
diff --git a/docs/zh/getting-started/locally/quick-start-spark.md b/docs/zh/getting-started/locally/quick-start-spark.md
index 19e2c7ed86..0539bf390c 100644
--- a/docs/zh/getting-started/locally/quick-start-spark.md
+++ b/docs/zh/getting-started/locally/quick-start-spark.md
@@ -58,7 +58,7 @@ sink {
```
-关于配置的更多信息请查看[配置的基本概念](../../concept/config.md)
+关于配置的更多信息请查看[配置的基本概念](../../introduction/concepts/config.md)
## 步骤 4: 运行SeaTunnel应用程序
@@ -112,6 +112,6 @@ row=16 : SGZCr, 94186144
## 此外
- 开始编写您自己的配置文件,选择您想要使用的[连接器](../../connectors/source),并根据连接器的文档配置参数。
-- 如果您想要了解更多关于SeaTunnel运行在Spark上的信息,请参阅[基于Spark的SeaTunnel](../../other-engine/spark.md)。
+- 如果您想要了解更多关于SeaTunnel运行在Spark上的信息,请参阅[基于Spark的SeaTunnel](../../engines/spark.md)。
- SeaTunnel有内置的`Zeta`引擎,它是作为SeaTunnel的默认引擎。您可以参考[快速开始](quick-start-seatunnel-engine.md)配置和运行数据同步作业。
diff --git a/docs/zh/introduction/configuration/schema-evolution.md b/docs/zh/introduction/configuration/schema-evolution.md
index 34f8363002..182e31eff0 100644
--- a/docs/zh/introduction/configuration/schema-evolution.md
+++ b/docs/zh/introduction/configuration/schema-evolution.md
@@ -15,19 +15,19 @@
## 已支持的连接器
### 源
-[Mysql-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/MySQL-CDC.md)
-[Oracle-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/Oracle-CDC.md)
+[Mysql-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/MySQL-CDC.md)
+[Oracle-CDC](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/source/Oracle-CDC.md)
### 目标
-[Jdbc-Mysql](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Jdbc.md)
-[Jdbc-Oracle](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Jdbc.md)
-[Jdbc-Postgres](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Jdbc.md)
-[Jdbc-Dameng](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Jdbc.md)
-[Jdbc-SqlServer](https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/sink/Jdbc.md)
-[StarRocks](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/StarRocks.md)
-[Doris](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Doris.md)
-[Paimon](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Paimon.md#模式演变)
-[Elasticsearch](https://github.com/apache/seatunnel/blob/dev/docs/zh/connector-v2/sink/Elasticsearch.md#模式演变)
+[Jdbc-Mysql](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Jdbc.md)
+[Jdbc-Oracle](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Jdbc.md)
+[Jdbc-Postgres](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Jdbc.md)
+[Jdbc-Dameng](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Jdbc.md)
+[Jdbc-SqlServer](https://github.com/apache/seatunnel/blob/dev/docs/en/connectors/sink/Jdbc.md)
+[StarRocks](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/StarRocks.md)
+[Doris](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Doris.md)
+[Paimon](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Paimon.md#模式演变)
+[Elasticsearch](https://github.com/apache/seatunnel/blob/dev/docs/zh/connectors/sink/Elasticsearch.md#模式演变)
注意:
* 目前模式演进不支持transform。不同类型数据库(Oracle-CDC -> Jdbc-Mysql)的模式演进目前不支持ddl中列的默认值。
diff --git a/docs/zh/introduction/configuration/sql-config.md b/docs/zh/introduction/configuration/sql-config.md
index 5c160d4885..14480808df 100644
--- a/docs/zh/introduction/configuration/sql-config.md
+++ b/docs/zh/introduction/configuration/sql-config.md
@@ -122,7 +122,7 @@ CREATE TABLE sink_table WITH (
INSERT INTO sink_table SELECT id, name, age, email FROM source_table;
```
-* `SELECT FROM` 部分为源端映射表的表名,`SELECT` 部分的语法参考:[SQL-transform](../transform-v2/sql.md) `query` 配置项。如果select的字段是关键字([参考](https://github.com/JSQLParser/JSqlParser/blob/master/src/main/jjtree/net/sf/jsqlparser/parser/JSqlParserCC.jjt)),你应该像这样使用\`fieldName\`
+* `SELECT FROM` 部分为源端映射表的表名,`SELECT` 部分的语法参考:[SQL-transform](../../transforms/sql.md) `query` 配置项。如果select的字段是关键字([参考](https://github.com/JSQLParser/JSqlParser/blob/master/src/main/jjtree/net/sf/jsqlparser/parser/JSqlParserCC.jjt)),你应该像这样使用\`fieldName\`
```sql
INSERT INTO sink_table SELECT id, name, age, email,`output` FROM source_table;
```
@@ -178,7 +178,7 @@ CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
```
* 该语法可以将一个`SELECT`查询结果作为一个临时表,用于的`INSERT INTO`操作
-* `SELECT` 部分的语法参考:[SQL Transform](../transform-v2/sql.md) `query` 配置项
+* `SELECT` 部分的语法参考:[SQL Transform](../transforms/sql.md) `query` 配置项
```sql
CREATE TABLE temp1 AS SELECT id, name, age, email FROM source_table;
diff --git a/docs/zh/transforms/llm.md b/docs/zh/transforms/llm.md
index d60e4ccb11..62b4942728 100644
--- a/docs/zh/transforms/llm.md
+++ b/docs/zh/transforms/llm.md
@@ -156,7 +156,7 @@ transform {
## tips
大模型API接口通常会有速率限制,可以配合Seatunnel的限速配置,已确保任务顺利运行。
-Seatunnel限速配置,请参考[speed-limit](../concept/speed-limit.md)了解详情
+Seatunnel限速配置,请参考[speed-limit](../introduction/concepts/speed-limit.md)了解详情
## 示例 OPENAI