This is an automated email from the ASF dual-hosted git repository.

zirui pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new fac34f5c13 [INLONG-580][Sort] Update metric option doc (#581)
fac34f5c13 is described below

commit fac34f5c1395e311aae79d6d3ab6f161eb7bf6f6
Author: Xin Gong <genzhedangd...@gmail.com>
AuthorDate: Mon Nov 7 17:07:40 2022 +0800

    [INLONG-580][Sort] Update metric option doc (#581)
---
 docs/data_node/extract_node/kafka.md                                | 2 +-
 docs/data_node/extract_node/mongodb-cdc.md                          | 3 +--
 docs/data_node/extract_node/mysql-cdc.md                            | 4 ++--
 docs/data_node/extract_node/oracle-cdc.md                           | 4 ++--
 docs/data_node/extract_node/postgresql-cdc.md                       | 2 +-
 docs/data_node/extract_node/pulsar.md                               | 2 +-
 docs/data_node/extract_node/sqlserver-cdc.md                        | 4 ++--
 docs/data_node/load_node/clickhouse.md                              | 2 +-
 docs/data_node/load_node/elasticsearch.md                           | 4 ++--
 docs/data_node/load_node/greenplum.md                               | 2 +-
 docs/data_node/load_node/hbase.md                                   | 2 +-
 docs/data_node/load_node/hdfs.md                                    | 4 ++--
 docs/data_node/load_node/hive.md                                    | 4 ++--
 docs/data_node/load_node/iceberg.md                                 | 2 +-
 docs/data_node/load_node/kafka.md                                   | 2 +-
 docs/data_node/load_node/mysql.md                                   | 2 +-
 docs/data_node/load_node/oracle.md                                  | 2 +-
 docs/data_node/load_node/postgresql.md                              | 2 +-
 docs/data_node/load_node/sqlserver.md                               | 2 +-
 docs/data_node/load_node/tdsql-postgresql.md                        | 2 +-
 docs/modules/sort/metrics.md                                        | 6 +++---
 .../version-1.3.0/data_node/extract_node/kafka.md                   | 2 +-
 .../version-1.3.0/data_node/extract_node/mongodb-cdc.md             | 2 +-
 .../version-1.3.0/data_node/extract_node/mysql-cdc.md               | 4 ++--
 .../version-1.3.0/data_node/extract_node/oracle-cdc.md              | 4 ++--
 .../version-1.3.0/data_node/extract_node/postgresql-cdc.md          | 2 +-
 .../version-1.3.0/data_node/extract_node/pulsar.md                  | 2 +-
 .../version-1.3.0/data_node/extract_node/sqlserver-cdc.md           | 4 ++--
 .../version-1.3.0/data_node/load_node/clickhouse.md                 | 2 +-
 .../version-1.3.0/data_node/load_node/elasticsearch.md              | 4 ++--
 .../version-1.3.0/data_node/load_node/greenplum.md                  | 2 +-
 .../version-1.3.0/data_node/load_node/hbase.md                      | 2 +-
 .../version-1.3.0/data_node/load_node/hdfs.md                       | 4 ++--
 .../version-1.3.0/data_node/load_node/hive.md                       | 4 ++--
 .../version-1.3.0/data_node/load_node/iceberg.md                    | 2 +-
 .../version-1.3.0/data_node/load_node/kafka.md                      | 2 +-
 .../version-1.3.0/data_node/load_node/mysql.md                      | 2 +-
 .../version-1.3.0/data_node/load_node/oracle.md                     | 2 +-
 .../version-1.3.0/data_node/load_node/postgresql.md                 | 2 +-
 .../version-1.3.0/data_node/load_node/sqlserver.md                  | 2 +-
 .../version-1.3.0/data_node/load_node/tdsql-postgresql.md           | 2 +-
 .../version-1.3.0/modules/sort/metrics.md                           | 6 +++---
 42 files changed, 58 insertions(+), 59 deletions(-)

diff --git a/docs/data_node/extract_node/kafka.md 
b/docs/data_node/extract_node/kafka.md
index d365c9241f..1a7a23b8ab 100644
--- a/docs/data_node/extract_node/kafka.md
+++ b/docs/data_node/extract_node/kafka.md
@@ -110,7 +110,7 @@ TODO: It will be supported in the future.
 | scan.startup.specific-offsets | optional | (none) | String | Specify offsets 
for each partition in case of 'specific-offsets' startup mode, e.g. 
'partition:0,offset:42;partition:1,offset:300'. |
 | scan.startup.timestamp-millis | optional | (none) | Long | Start from the 
specified epoch timestamp (milliseconds) used in case of 'timestamp' startup 
mode. |
 | scan.topic-partition-discovery.interval | optional | (none) | Duration | 
Interval for consumer to discover dynamically created Kafka topics and 
partitions periodically. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 | sink.ignore.changelog | optional | false | Boolean | Import all changelog mode data into Kafka as-is. |
 
 ## Available Metadata
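
For illustration, here is a minimal Flink SQL sketch of the renamed option on an extract node. The table schema, topic, and label values are placeholders, and the `kafka-inlong` connector name is assumed from the surrounding Kafka extract-node doc rather than taken from this commit:

```sql
-- Hypothetical DDL; the point is the renamed 'inlong.metric.labels' option,
-- whose value is now key=value pairs joined by '&'.
CREATE TABLE kafka_extract_node (
    `id` INT,
    `name` STRING
) WITH (
    'connector' = 'kafka-inlong',   -- assumed connector identifier
    'topic' = 'user',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'csv',
    -- old form: 'inlong.metric' = 'xxgroup&xxstream&xxnode'
    'inlong.metric.labels' = 'groupId=xxgroup&streamId=xxstream&nodeId=xxnode'
);
```
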
diff --git a/docs/data_node/extract_node/mongodb-cdc.md 
b/docs/data_node/extract_node/mongodb-cdc.md
index b3e9862f84..0f6f27f828 100644
--- a/docs/data_node/extract_node/mongodb-cdc.md
+++ b/docs/data_node/extract_node/mongodb-cdc.md
@@ -134,8 +134,7 @@ TODO: It will be supported in the future.
 | poll.max.batch.size       | optional     | 1000             | Integer  | 
Maximum number of change stream documents to include in a single batch when 
polling for new data. |
 | poll.await.time.ms        | optional     | 1500             | Integer  | The 
amount of time to wait before checking for new results on the change stream. |
 | heartbeat.interval.ms     | optional     | 0                | Integer  | The length of time in milliseconds between sending heartbeat messages. Use 0 to disable. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
-
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 ## Available Metadata
 
 The following format metadata can be exposed as read-only (VIRTUAL) columns in 
a table definition.
diff --git a/docs/data_node/extract_node/mysql-cdc.md 
b/docs/data_node/extract_node/mysql-cdc.md
index 0de9133506..d5694b9341 100644
--- a/docs/data_node/extract_node/mysql-cdc.md
+++ b/docs/data_node/extract_node/mysql-cdc.md
@@ -305,11 +305,11 @@ TODO: It will be supported in the future.
           See more about the <a 
href="https://debezium.io/documentation/reference/1.5/connectors/mysql.html#mysql-connector-properties";>Debezium's
 MySQL Connector properties</a></td> 
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/extract_node/oracle-cdc.md 
b/docs/data_node/extract_node/oracle-cdc.md
index b98b89f767..f861091bcf 100644
--- a/docs/data_node/extract_node/oracle-cdc.md
+++ b/docs/data_node/extract_node/oracle-cdc.md
@@ -322,11 +322,11 @@ TODO: It will be supported in the future.
           See more about the <a 
href="https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties";>Debezium's
 Oracle Connector properties</a></td> 
      </tr>
      <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>    
diff --git a/docs/data_node/extract_node/postgresql-cdc.md 
b/docs/data_node/extract_node/postgresql-cdc.md
index f1e4de9fd2..9252e03906 100644
--- a/docs/data_node/extract_node/postgresql-cdc.md
+++ b/docs/data_node/extract_node/postgresql-cdc.md
@@ -114,7 +114,7 @@ TODO: It will be supported in the future.
 | decoding.plugin.name | optional | decoderbufs | String | The name of the 
Postgres logical decoding plug-in installed on the server. Supported values are 
decoderbufs, wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming 
and pgoutput. |
 | slot.name | optional | flink | String | The name of the PostgreSQL logical 
decoding slot that was created for streaming changes from a particular plug-in 
for a particular database/schema. The server uses this slot to stream events to 
the connector that you are configuring. Slot names must conform to PostgreSQL 
replication slot naming rules, which state: "Each replication slot has a name, 
which can contain lower-case letters, numbers, and the underscore character." |
 | debezium.* | optional | (none) | String | Pass-through Debezium's properties 
to Debezium Embedded Engine which is used to capture data changes from Postgres 
server. For example: 'debezium.snapshot.mode' = 'never'. See more about the 
[Debezium's Postgres Connector 
properties](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties).
 |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 **Note**: It is recommended to set a different `slot.name` for each table to avoid the potential error PSQLException: ERROR: replication slot "flink" is active for PID 974.  
 **Note**: For PSQLException: ERROR: all replication slots are in use Hint: Free one or increase max_replication_slots, we can delete a slot with the following statement.
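
The statement itself falls outside this hunk; as a reference sketch, PostgreSQL's built-in `pg_drop_replication_slot` function frees a slot (the slot name `flink` is just the default from the table above):

```sql
-- Drop a stale replication slot so the name can be reused by a new connector.
SELECT pg_drop_replication_slot('flink');
```
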
diff --git a/docs/data_node/extract_node/pulsar.md 
b/docs/data_node/extract_node/pulsar.md
index 939266b7e2..5d28627b82 100644
--- a/docs/data_node/extract_node/pulsar.md
+++ b/docs/data_node/extract_node/pulsar.md
@@ -107,7 +107,7 @@ TODO
 | key.fields-prefix             | optional | (none)        | String | Define a custom prefix for all fields in the key format to avoid name conflicts with fields in the value format. By default, the prefix is empty. If a custom prefix is defined, the table schema and `key.fields` use prefixed names; when constructing the data type of the key format, the prefix is removed and the non-prefixed names are used within the key format. |
 | format or value.format        | required | (none)        | String | The serialization format for the Pulsar message value, supporting JSON, Avro, etc. For more information, see the Flink formats. |
 | value.fields-include          | optional | ALL           | Enum   | Strategy for which fields of the Pulsar message value to include, either ALL or EXCEPT_KEY. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Available Metadata
 
diff --git a/docs/data_node/extract_node/sqlserver-cdc.md 
b/docs/data_node/extract_node/sqlserver-cdc.md
index 54edd11a95..21377471e0 100644
--- a/docs/data_node/extract_node/sqlserver-cdc.md
+++ b/docs/data_node/extract_node/sqlserver-cdc.md
@@ -186,11 +186,11 @@ TODO
       <td>The session time zone in database server, e.g. "Asia/Shanghai".</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/clickhouse.md 
b/docs/data_node/load_node/clickhouse.md
index b7fe70e041..dd63a0c75f 100644
--- a/docs/data_node/load_node/clickhouse.md
+++ b/docs/data_node/load_node/clickhouse.md
@@ -100,7 +100,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/elasticsearch.md 
b/docs/data_node/load_node/elasticsearch.md
index 67b678aca6..2178a2060a 100644
--- a/docs/data_node/load_node/elasticsearch.md
+++ b/docs/data_node/load_node/elasticsearch.md
@@ -250,11 +250,11 @@ TODO: It will be supported in the future.
       </td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/greenplum.md 
b/docs/data_node/load_node/greenplum.md
index 20594c9bae..4803da8ed8 100644
--- a/docs/data_node/load_node/greenplum.md
+++ b/docs/data_node/load_node/greenplum.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/hbase.md 
b/docs/data_node/load_node/hbase.md
index 68ff8f4e79..4ab9eb4210 100644
--- a/docs/data_node/load_node/hbase.md
+++ b/docs/data_node/load_node/hbase.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | lookup.cache.ttl | optional | (none) | Duration | The max time to live for each row in the lookup cache; over this time, the oldest rows expire. Note, the "cache.max-rows" and "cache.ttl" options must both be specified if either of them is specified. Lookup cache is disabled by default. |
 | lookup.max-retries | optional | 3 | Integer | The max retry times if lookup 
database failed. |
 | properties.* | optional | (none) | String | This can set and pass arbitrary 
HBase configurations. Suffix names must match the configuration key defined in 
[HBase Configuration 
documentation](https://hbase.apache.org/2.3/book.html#hbase_default_configurations).
 Flink will remove the "properties." key prefix and pass the transformed key 
and values to the underlying HBaseClient. For example, you can add a kerberos 
authentication parameter 'properties.hbase.security.authentication' = 'kerb 
[...]
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/hdfs.md b/docs/data_node/load_node/hdfs.md
index 68f1ee9f64..5ac161be77 100644
--- a/docs/data_node/load_node/hdfs.md
+++ b/docs/data_node/load_node/hdfs.md
@@ -107,11 +107,11 @@ The file sink supports file compactions, which allows 
applications to have small
       <td>The compaction target file size, the default value is the rolling 
file size.</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/hive.md b/docs/data_node/load_node/hive.md
index fc9261c5b8..d9723c3482 100644
--- a/docs/data_node/load_node/hive.md
+++ b/docs/data_node/load_node/hive.md
@@ -130,11 +130,11 @@ TODO: It will be supported in the future.
       Support to configure multiple policies: 'metastore,success-file'.</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is 
groupId&streamId&nodeId.</td> 
+      <td>Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td>
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/iceberg.md 
b/docs/data_node/load_node/iceberg.md
index a839ce2021..4b15e24abd 100644
--- a/docs/data_node/load_node/iceberg.md
+++ b/docs/data_node/load_node/iceberg.md
@@ -162,7 +162,7 @@ TODO
 | clients          | optional for hive catalog                   | 2       | 
Integer | The Hive metastore client pool size, default value is 2.     |
 | warehouse        | optional for hadoop catalog or hive catalog | (none)  | String  | For Hive catalog, the Hive warehouse location; users should specify this path if they neither set `hive-conf-dir` to a location containing a `hive-site.xml` configuration file nor add a correct `hive-site.xml` to the classpath. For Hadoop catalog, the HDFS directory to store metadata files and data files. |
 | hive-conf-dir    | optional for hive catalog                   | (none)  | String  | Path to a directory containing a `hive-site.xml` configuration file which will be used to provide custom Hive configuration values. The value of `hive.metastore.warehouse.dir` from `<hive-conf-dir>/hive-site.xml` (or the Hive configuration file on the classpath) will be overwritten by the `warehouse` value if both `hive-conf-dir` and `warehouse` are set when creating the Iceberg catalog. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/kafka.md 
b/docs/data_node/load_node/kafka.md
index 04f16d1891..7a14a88eb5 100644
--- a/docs/data_node/load_node/kafka.md
+++ b/docs/data_node/load_node/kafka.md
@@ -94,7 +94,7 @@ TODO: It will be supported in the future.
 | sink.partitioner | optional | 'default' | String | Output partitioning from 
Flink's partitions into Kafka's partitions. Valid values are <br/>`default`: 
use the kafka default partitioner to partition records. <br/>`fixed`: each 
Flink partition ends up in at most one Kafka partition. <br/>`round-robin`: a 
Flink partition is distributed to Kafka partitions sticky round-robin. It only 
works when record's keys are not specified. Custom FlinkKafkaPartitioner 
subclass: e.g. 'org.mycompany.My [...]
 | sink.semantic | optional | at-least-once | String | Defines the delivery 
semantic for the Kafka sink. Valid enumerations are 'at-least-once', 
'exactly-once' and 'none'. See [Consistency 
guarantees](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/kafka/#consistency-guarantees)
 for more details. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the Kafka sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Available Metadata
 
diff --git a/docs/data_node/load_node/mysql.md 
b/docs/data_node/load_node/mysql.md
index a581ec3d8d..5639fc1396 100644
--- a/docs/data_node/load_node/mysql.md
+++ b/docs/data_node/load_node/mysql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.buffer-flush.interval | optional | 1s | Duration | The flush interval 
mills, over this time, asynchronous threads will flush data. Can be set to '0' 
to disable it. Note, 'sink.buffer-flush.max-rows' can be set to '0' with the 
flush interval set allowing for complete async processing of buffered actions. 
| |
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/oracle.md 
b/docs/data_node/load_node/oracle.md
index a7201772d4..f7c335ef6a 100644
--- a/docs/data_node/load_node/oracle.md
+++ b/docs/data_node/load_node/oracle.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/postgresql.md 
b/docs/data_node/load_node/postgresql.md
index ef3ae7c049..69cf18a926 100644
--- a/docs/data_node/load_node/postgresql.md
+++ b/docs/data_node/load_node/postgresql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/sqlserver.md 
b/docs/data_node/load_node/sqlserver.md
index 1315f252c7..be6bc84eba 100644
--- a/docs/data_node/load_node/sqlserver.md
+++ b/docs/data_node/load_node/sqlserver.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/tdsql-postgresql.md 
b/docs/data_node/load_node/tdsql-postgresql.md
index d737a296ec..d35fa0e347 100644
--- a/docs/data_node/load_node/tdsql-postgresql.md
+++ b/docs/data_node/load_node/tdsql-postgresql.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing 
records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of 
the JDBC sink operator. By default, the parallelism is determined by the 
framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, 
ingest them as `INSERT`. |
-| inlong.metric | optional | (none) | String | Inlong metric label, format of 
value is groupId&streamId&nodeId. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the format of the value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
 
 ## Data Type Mapping
 
diff --git a/docs/modules/sort/metrics.md b/docs/modules/sort/metrics.md
index d15ced5d1d..bf56b4ff3d 100644
--- a/docs/modules/sort/metrics.md
+++ b/docs/modules/sort/metrics.md
@@ -5,7 +5,7 @@ sidebar_position: 4
 
 ## Overview
 
-We add metric computing for node. Sort will compute metric when user just need 
add with option `inlong.metric` that includes `groupId&streamId&nodeId`.
+We add metric computing for each node. Sort computes metrics once the user adds the `inlong.metric.labels` option in the `WITH` clause; its value takes the form `groupId=xxgroup&streamId=xxstream&nodeId=xxnode`.
 Sort exports metrics through the Flink metric group, so users can use a [metric reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/metric_reporters/) to collect metric data.
 
 ## Metric
@@ -49,7 +49,7 @@ One example about sync mysql data to postgresql data. And We 
will introduce usag
         'scan.incremental.snapshot.enabled' = 'true',
         'server-time-zone' = 'GMT+8',
         'table-name' = 'user',
-        'inlong.metric' = 'mysqlGroup&mysqlStream&mysqlNode1'
+        'inlong.metric.labels' = 'groupId=mysqlGroup&streamId=mysqlStream&nodeId=mysqlNode1'
 );
 
  CREATE TABLE `table_groupId_streamId_nodeId2`(
@@ -63,7 +63,7 @@ One example about sync mysql data to postgresql data. And We 
will introduce usag
          'username' = 'postgres',
          'password' = 'inlong',
          'table-name' = 'public.user',
-         'inlong.metric' = 'pggroup&pgStream&pgNode'
+         'inlong.metric.labels' = 
'groupId=pggroup&streamId=pgStream&nodeId=pgNode'
          );
 
  INSERT INTO `table_groupId_streamId_nodeId2`
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/kafka.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/kafka.md
index af259afc3f..a9be3b3484 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/kafka.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/kafka.md
@@ -108,7 +108,7 @@ TODO: 将在未来支持此功能。
 | scan.startup.specific-offsets | 可选 | (none) | String | 在使用 
'specific-offsets' 启动模式时为每个 partition 指定 offset,例如 
'partition:0,offset:42;partition:1,offset:300'。 |
 | scan.startup.timestamp-millis | 可选 | (none) | Long | 在使用 'timestamp' 
启动模式时指定启动的时间戳(单位毫秒)。 |
 | scan.topic-partition-discovery.interval | 可选 | (none) | Duration | Consumer 
定期探测动态创建的 Kafka topic 和 partition 的时间间隔。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 | sink.ignore.changelog | 可选 | false | 布尔型 | 支持所有类型的 changelog 流 ingest 到 
Kafka。 |
 
 ## 可用的元数据字段
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mongodb-cdc.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mongodb-cdc.md
index 8a0a36f352..165f337267 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mongodb-cdc.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mongodb-cdc.md
@@ -134,7 +134,7 @@ TODO: 未来会支持
 | poll.max.batch.size       | 可选         | 1000       | Integer  | 
轮询新数据时,单个批次中包含的最大更改流文档数。             |
 | poll.await.time.ms        | 可选         | 1500       | Integer  | 
在更改流上检查新结果之前等待的时间量。                       |
 | heartbeat.interval.ms     | 可选         | 0          | Integer  | 
发送心跳消息之间的时间长度(以毫秒为单位)。使用 0 禁用。    |
-| inlong.metric             | 可选         | (none)     | String   | inlong 
metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels             | 可选         | (none)     | String   | 
inlong metric 的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 可用元数据
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mysql-cdc.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mysql-cdc.md
index 46b6e03b7a..4deb075d13 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mysql-cdc.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/mysql-cdc.md
@@ -301,11 +301,11 @@ TODO: 将在未来支持此功能。
           详细了解 <a 
href="https://debezium.io/documentation/reference/1.5/connectors/mysql.html#mysql-connector-properties";>Debezium
 的 MySQL 连接器属性。</a></td> 
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
     </tr>
     </tbody>
 </table>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/oracle-cdc.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/oracle-cdc.md
index 565f3b4d53..3654076b51 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/oracle-cdc.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/oracle-cdc.md
@@ -323,11 +323,11 @@ Oracle CDC 消费者的可选启动模式,有效枚举为"initial"
           详细了解 <a 
href="https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties";>Debezium
 的 Oracle 连接器属性</a></td> 
      </tr>
      <tr>
-       <td>inlong.metric</td>
+       <td>inlong.metric.labels</td>
        <td>可选</td>
        <td style={{wordWrap: 'break-word'}}>(none)</td>
        <td>String</td>
-       <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+       <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
      </tr>
     </tbody>
 </table>    
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/postgresql-cdc.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/postgresql-cdc.md
index 7ad1f01bcb..26fa165f7c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/postgresql-cdc.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/postgresql-cdc.md
@@ -114,7 +114,7 @@ TODO: 将在未来支持此功能。
 | decoding.plugin.name | 可选 | decoderbufs | String | 服务器上安装的 Postgres 
逻辑解码插件的名称。 支持的值是 
decoderbufs、wal2json、wal2json_rds、wal2json_streaming、wal2json_rds_streaming 和 
pgoutput。 |
 | slot.name | 可选 | flink | String | PostgreSQL 
逻辑解码槽的名称,它是为从特定数据库/模式的特定插件流式传输更改而创建的。 服务器使用此插槽将事件流式传输到您正在配置的连接器。 插槽名称必须符合 
PostgreSQL 复制插槽命名规则,其中规定:“每个复制插槽都有一个名称,可以包含小写字母、数字和下划线字符。” |
 | debezium.* | 可选 | (none) | String | 将 Debezium 的属性传递给用于从 Postgres 服务器捕获数据更改的 
Debezium Embedded Engine。 例如:“debezium.snapshot.mode”=“never”。 查看更多关于 [Debezium 
的 Postgres 
连接器属性](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties)。
 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 **Note**: `slot.name` 建议为不同的表设置以避免潜在的 PSQLException: ERROR: replication slot 
"flink" is active for PID 974 error。  
 **Note**: PSQLException: ERROR: all replication slots are in use Hint: Free 
one or increase max_replication_slots. 我们可以通过以下语句删除槽。  
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/pulsar.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/pulsar.md
index 7e8ffb86b9..13c54c003b 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/pulsar.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/pulsar.md
@@ -106,7 +106,7 @@ TODO
 | key.fields-prefix             | 可选     | (none)        | String | 为 key 格式的所有字段定义自定义前缀,以避免与 value 格式的字段名称冲突。默认情况下,前缀为空。如果定义了自定义前缀,则表结构和 `key.fields` 都会使用带前缀的名称;在构造 key 格式的数据类型时,前缀会被移除,并在 key 格式中使用无前缀的名称。 |
 | format or value.format        | 必需     | (none)        | String | Pulsar 消息值的序列化格式,支持 JSON、Avro 等。更多信息请参见 Flink 格式。 |
 | value.fields-include          | 可选     | ALL           | Enum   | 控制 Pulsar 消息值中包含哪些字段,可选值为 ALL 和 EXCEPT_KEY。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 可用元数据
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/sqlserver-cdc.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/sqlserver-cdc.md
index d33648a996..3c90953ae2 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/sqlserver-cdc.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/extract_node/sqlserver-cdc.md
@@ -186,11 +186,11 @@ TODO
       <td>SQLServer 数据库连接配置时区。 例如: "Asia/Shanghai"。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
      </tr>
     </tbody>
 </table>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/clickhouse.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/clickhouse.md
index 67cdf8d11e..e5a0f84b3e 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/clickhouse.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/clickhouse.md
@@ -98,7 +98,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/elasticsearch.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/elasticsearch.md
index c967f1d609..b353f1aac6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/elasticsearch.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/elasticsearch.md
@@ -246,11 +246,11 @@ TODO: 将在未来支持这个特性。
       </td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
      </tr>
     </tbody>
 </table>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/greenplum.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/greenplum.md
index 5b886d7540..0d34e19826 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/greenplum.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/greenplum.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hbase.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hbase.md
index 644f2fdcfe..25a0c5d0f8 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hbase.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hbase.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | lookup.cache.ttl | 可选 | (none) | Duration | 
查找缓存中每一行的最大生存时间,在这段时间内,最老的行将过期。注意:"lookup.cache.max-rows" 和 "lookup.cache.ttl" 
必须同时被设置。默认情况下,查找缓存是禁用的。 |
 | lookup.max-retries | 可选 | 3 | Integer | 查找数据库失败时的最大重试次数。 |
 | properties.* | 可选 | (none) | String | 可以设置任意 HBase 的配置项。后缀名必须匹配在 [HBase 
配置文档](https://hbase.apache.org/2.3/book.html#hbase_default_configurations) 
中定义的配置键。Flink 将移除 "properties." 配置键前缀并将变换后的配置键和值传入底层的 HBase 客户端。 例如您可以设置 
'properties.hbase.security.authentication' = 'kerberos' 等kerberos认证参数。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hdfs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hdfs.md
index 7e48fd2896..3d724b6d99 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hdfs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hdfs.md
@@ -109,11 +109,11 @@ CREATE TABLE hdfs_load_node (
       <td>合并目标文件大小,默认值为滚动文件大小。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
     </tr>
     </tbody>
 </table>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hive.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hive.md
index 5f11e74986..e53a511a99 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hive.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/hive.md
@@ -128,11 +128,11 @@ TODO: 未来版本支持
       支持同时指定多个提交策略:'metastore,success-file'。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。</td> 
      </tr>
     </tbody>
 </table>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/iceberg.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/iceberg.md
index 930605b239..9dd0cb94b3 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/iceberg.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/iceberg.md
@@ -163,7 +163,7 @@ TODO
 | clients          | hive catalog可选                 | 2      | Integer | Hive 
Metastore 客户端池大小,默认值为 2                      |
 | warehouse        | hive catalog或hadoop catalog可选 | (none) | String  | 对于 
Hive 目录,是 Hive 
仓库位置,如果既不设置`hive-conf-dir`指定包含`hive-site.xml`配置文件的位置也不添加正确`hive-site.xml`的类路径,用户应指定此路径。对于hadoop目录,HDFS目录存放元数据文件和数据文件
 |
 | hive-conf-dir    | hive catalog可选                 | (none) | String  | 包含 `hive-site.xml` 配置文件的目录路径,用于提供自定义的 Hive 配置值。如果在创建 Iceberg catalog 时同时设置了 `hive-conf-dir` 和 `warehouse`,那么 `<hive-conf-dir>/hive-site.xml`(或来自类路径的 hive 配置文件)中 `hive.metastore.warehouse.dir` 的值将被 `warehouse` 的值覆盖。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/kafka.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/kafka.md
index 37ce5f30d3..43130d3f7f 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/kafka.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/kafka.md
@@ -91,7 +91,7 @@ TODO: 将在未来支持此功能。
 | sink.partitioner | 可选 | 'default' | String | Flink partition 到 Kafka 
partition 的分区映射关系,可选值有:<br/>default:使用 Kafka 默认的分区器对消息进行分区。<br/>fixed:每个 Flink 
partition 最终对应最多一个 Kafka partition。<br/>round-robin:Flink partition 
按轮循(round-robin)的模式对应到 Kafka partition。只有当未指定消息的消息键时生效。<br/>自定义 
FlinkKafkaPartitioner 的子类:例如 'org.mycompany.MyPartitioner'。请参阅 [Sink 
分区](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/connectors/table/kafka/#sink-%E5%88%86%E5%8C%BA)
 以获取更多细节。 |
 | sink.semantic | 可选 | at-least-once | String | 定义 Kafka sink 的语义。有效值为 
'at-least-once','exactly-once' 和 'none'。请参阅 
[一致性保证](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/connectors/table/kafka/#%E4%B8%80%E8%87%B4%E6%80%A7%E4%BF%9D%E8%AF%81)
 以获取更多细节。 |
 | sink.parallelism | 可选 | (none) | Integer | 定义 Kafka sink 
算子的并行度。默认情况下,并行度由框架定义为与上游串联的算子相同。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 可用的元数据字段
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/mysql.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/mysql.md
index b64e43bcbb..11134a2a18 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/mysql.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/mysql.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.buffer-flush.interval | 可选 | 1s | Duration | flush 间隔时间,超过该时间后异步线程将 
flush 数据。可以设置为 '0' 来禁用它。注意, 为了完全异步地处理缓存的 flush 事件,可以将 
'sink.buffer-flush.max-rows' 设置为 '0' 并配置适当的 flush 时间间隔。 |
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/oracle.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/oracle.md
index c15eece9f5..66c78d817f 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/oracle.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/oracle.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/postgresql.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/postgresql.md
index 182d1c3e11..408e15845d 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/postgresql.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/postgresql.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/sqlserver.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/sqlserver.md
index 954909f618..d485115939 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/sqlserver.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/sqlserver.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/tdsql-postgresql.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/tdsql-postgresql.md
index 9c5e320f4a..744da9af36 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/tdsql-postgresql.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/data_node/load_node/tdsql-postgresql.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 
算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 
INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 
的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|
 
 ## 数据类型映射
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/modules/sort/metrics.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/modules/sort/metrics.md
index 7f05d6e7cf..14306180c8 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/modules/sort/metrics.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.0/modules/sort/metrics.md
@@ -5,7 +5,7 @@ sidebar_position: 4
 
 ## 概览
 
-我们为节点增加了指标计算。 用户添加 with 选项 `inlong.metric` 后 Sort 会计算指标,`inlong.metric` 
选项的值由三部分构成:`groupId&streamId&nodeId`。
+我们为节点增加了指标计算。 用户添加 with 选项 `inlong.metric.labels` 后 Sort 
会计算指标,`inlong.metric.labels` 
选项的值由三部分构成:`groupId=xxgroup&streamId=xxstream&nodeId=xxnode`。
 用户可以使用 [metric 
reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/metric_reporters/)
 去上报数据。
 
 ## 指标
@@ -49,7 +49,7 @@ sidebar_position: 4
         'scan.incremental.snapshot.enabled' = 'true',
         'server-time-zone' = 'GMT+8',
         'table-name' = 'user',
-        'inlong.metric' = 'mysqlGroup&mysqlStream&mysqlNode1'
+        'inlong.metric.labels' = 'groupId=mysqlGroup&streamId=mysqlStream&nodeId=mysqlNode1'
 );
 
  CREATE TABLE `table_groupId_streamId_nodeId2`(
@@ -63,7 +63,7 @@ sidebar_position: 4
          'username' = 'postgres',
          'password' = 'inlong',
          'table-name' = 'public.user',
-         'inlong.metric' = 'pggroup&pgStream&pgNode'
+         'inlong.metric.labels' = 
'groupId=pggroup&streamId=pgStream&nodeId=pgNode'
          );
 
  INSERT INTO `table_groupId_streamId_nodeId2`
