libenchao commented on a change in pull request #12759:
URL: https://github.com/apache/flink/pull/12759#discussion_r449761475



##########
File path: docs/dev/table/connectors/index.zh.md
##########
@@ -25,108 +25,108 @@ under the License.
 -->
 
 
-Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+Flink Table 和 SQL 可以连接到外部系统进行批和流的读写。table source 用于读取存储在外部系统(例如数据库、键值存储、消息队列或者文件系统)中的数据。table sink 可以将表存储到另一个外部的系统中。不同的 source 和 sink 支持不同的数据格式,例如 CSV、Avro、Parquet 或者 ORC。
 
-This page describes how to register table sources and table sinks in Flink using the natively supported connectors. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+本文档主要描述如何使用内置支持的连接器(connector)注册 table source 和 sink。在 source 或 sink 注册完成之后,就可以在 Table API 和 SQL 中访问它们了。
 
-<span class="label label-info">NOTE</span> If you want to implement your own 
*custom* table source or sink, have a look at the [user-defined sources & sinks 
page]({% link dev/table/sourceSinks.zh.md %}).
+<span class="label label-info">注意</span> 如果你想要实现自定义 table source 或 sink, 可以查看 
[自定义 source 和 sink]({% link dev/table/sourceSinks.zh.md %})。
 
-<span class="label label-danger">Attention</span> Flink Table & SQL introduces 
a new set of connector options since 1.11.0, if you are using the legacy 
connector options, please refer to the [legacy documentation]({% link 
dev/table/connect.zh.md %}).
+<span class="label label-danger">注意</span> Flink Table & SQL 在 1.11.0 
之后引入了一组新的连接器选项, 如果你现在还在使用遗留(legacy)连接器选项,可以查阅 [遗留文档]({% link 
dev/table/connect.zh.md %})。
 
 * This will be replaced by the TOC
 {:toc}
 
-Supported Connectors
+已经支持的连接器
 ------------
 
-Flink natively support various connectors. The following tables list all available connectors.
+Flink 原生支持各种不同的连接器。下表列出了所有可用的连接器。
 
 <table class="table table-bordered">
     <thead>
       <tr>
-        <th class="text-left">Name</th>
-        <th class="text-center">Version</th>
-        <th class="text-center">Source</th>
-        <th class="text-center">Sink</th>
+        <th class="text-left">名称</th>
+        <th class="text-center">版本</th>
+        <th class="text-center">源端</th>
+        <th class="text-center">目标端</th>
       </tr>
     </thead>
     <tbody>
     <tr>
       <td><a href="{% link dev/table/connectors/filesystem.zh.md 
%}">Filesystem</a></td>
       <td></td>
-      <td>Bounded and Unbounded Scan, Lookup</td>
-      <td>Streaming Sink, Batch Sink</td>
+      <td>有界和无界的扫描和查询</td>
+      <td>流式,批处理</td>
     </tr>
     <tr>
       <td><a href="{% link dev/table/connectors/elasticsearch.zh.md 
%}">Elasticsearch</a></td>
       <td>6.x & 7.x</td>
-      <td>Not supported</td>
-      <td>Streaming Sink, Batch Sink</td>
+      <td>不支持</td>
+      <td>流式,批处理</td>
     </tr>
     <tr>
       <td><a href="{% link dev/table/connectors/kafka.zh.md %}">Apache 
Kafka</a></td>
       <td>0.10+</td>
-      <td>Unbounded Scan</td>
-      <td>Streaming Sink, Batch Sink</td>
+      <td>无界的扫描</td>
+      <td>流式,批处理</td>
     </tr>
     <tr>
       <td><a href="{% link dev/table/connectors/jdbc.zh.md %}">JDBC</a></td>
       <td></td>
-      <td>Bounded Scan, Lookup</td>
-      <td>Streaming Sink, Batch Sink</td>
+      <td>有界的扫描和查询</td>
+      <td>流式,批处理</td>
     </tr>
     <tr>
       <td><a href="{% link dev/table/connectors/hbase.zh.md %}">Apache 
HBase</a></td>
       <td>1.4.x</td>
-      <td>Bounded Scan, Lookup</td>
-      <td>Streaming Sink, Batch Sink</td>
+      <td>有界的扫描和查询</td>
+      <td>流式,批处理</td>
     </tr>
     </tbody>
 </table>
 
 {% top %}
 
-How to use connectors
+如何使用连接器
 --------
 
-Flink supports to use SQL CREATE TABLE statement to register a table. One can define the table name, the table schema, and the table options for connecting to an external system.
+Flink 支持使用 SQL 建表语句来注册 table。可以定义表的名称、表结构、以及连接外部系统用的一些选项。
 
-The following code shows a full example of how to connect to Kafka for reading Json records.
+下面的代码展示了如何读取 Kafka 并且用 Json 解析数据的一个完整的例子。
 
 <div class="codetabs" markdown="1">
 <div data-lang="SQL" markdown="1">
 {% highlight sql %}
 CREATE TABLE MyUserTable (
-  -- declare the schema of the table
+  -- 声明表结构
   `user` BIGINT,
   message STRING,
   ts TIMESTAMP,
-  proctime AS PROCTIME(), -- use computed column to define proctime attribute
-  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+  proctime AS PROCTIME(), -- 使用计算列定义处理时间属性
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- 使用 WATERMARK 语句定义事件时间属性
 ) WITH (
-  -- declare the external system to connect to
+  -- 声明要连接的外部系统
   'connector' = 'kafka',
   'topic' = 'topic_name',
   'scan.startup.mode' = 'earliest-offset',
   'properties.bootstrap.servers' = 'localhost:9092',
-  'format' = 'json'   -- declare a format for this system
+  'format' = 'json'   -- 声明此系统的格式
 )
 {% endhighlight %}
 </div>
 </div>
 
-In this way the desired connection properties are converted into string-based key-value pairs. So-called [table factories]({% link dev/table/sourceSinks.zh.md %}#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+使用普遍性连接参数已经转变成了基于字符串的键值对。 其实表工厂 [table factories]({% link dev/table/sourceSinks.zh.md %}#define-a-tablefactory) 利用对应的键值对来创建相应的 table 源端,table 目标端,和相应的格式。所有的表工厂可以在 Java’s Service Provider Interfaces (SPI) 里面被找到。 现在已经经可能的考虑到所有各类涉及到的表对应匹配的表工厂设计。

Review comment:
       This translation could use some polish; it reads a little awkwardly. If a literal translation is hard to make read well, a freer translation is fine. For example:
   `In this way the desired connection properties are converted into string-based key-value pairs. So-called [table factories]({% link dev/table/sourceSinks.zh.md %}#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs.` ->
   `连接器的属性是以字符串键值对来配置的,[表工厂]({% link dev/table/sourceSinks.zh.md %}#define-a-tablefactory)根据配置的键值对来构造对应的数据源、数据汇以及对应的格式`
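
   For readers of this thread, the mechanism that paragraph describes is plain Java SPI discovery. Below is a minimal, self-contained sketch of the "exactly-one matching factory" lookup, assuming a hypothetical `TableFactory` interface; Flink's real factory interfaces live in `org.apache.flink.table.factories`, and this is not that API:

   ```java
   import java.util.ServiceLoader;

   // Hypothetical factory interface for illustration only; it mirrors the
   // idea of Flink's table factories but is not the actual Flink API.
   interface TableFactory {
       // Identifier matched against the 'connector' option, e.g. "kafka".
       String factoryIdentifier();
   }

   class FactoryDiscovery {
       // Load every implementation registered via META-INF/services and
       // require exactly one whose identifier matches the requested name.
       static TableFactory find(String identifier) {
           TableFactory match = null;
           for (TableFactory candidate : ServiceLoader.load(TableFactory.class)) {
               if (candidate.factoryIdentifier().equals(identifier)) {
                   if (match != null) {
                       throw new IllegalStateException(
                               "Multiple factories found for: " + identifier);
                   }
                   match = candidate;
               }
           }
           if (match == null) {
               throw new IllegalStateException("No factory found for: " + identifier);
           }
           return match;
       }
   }
   ```

   Implementations are discovered because their fully qualified class names are listed under `META-INF/services/`, which is how `ServiceLoader` finds them at runtime; that is the fact the translated sentence needs to carry.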

##########
File path: docs/dev/table/connectors/index.zh.md
##########

Review comment:
       The same applies to the rest of the section below; I won't list every instance.
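
   And for the "After a source or sink has been registered, it can be accessed by Table API & SQL statements" sentence in the hunk quoted above, here is a short sketch of what that access looks like. It assumes Flink 1.11's `TableEnvironment` and `executeSql`; treat the exact builder calls as illustrative rather than definitive:

   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class RegisterAndQuery {
       public static void main(String[] args) {
           // Blink planner in streaming mode (the 1.11-era recommended setup).
           EnvironmentSettings settings = EnvironmentSettings.newInstance()
                   .useBlinkPlanner()
                   .inStreamingMode()
                   .build();
           TableEnvironment tableEnv = TableEnvironment.create(settings);

           // Register the table as in the docs example (schema abbreviated).
           tableEnv.executeSql(
                   "CREATE TABLE MyUserTable (`user` BIGINT, message STRING) "
                           + "WITH ('connector' = 'kafka', 'topic' = 'topic_name', "
                           + "'properties.bootstrap.servers' = 'localhost:9092', "
                           + "'scan.startup.mode' = 'earliest-offset', "
                           + "'format' = 'json')");

           // Once registered, the table is visible to any later SQL statement.
           tableEnv.executeSql("SELECT `user`, message FROM MyUserTable").print();
       }
   }
   ```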




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

