wuchong commented on a change in pull request #9415: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into C…
URL: https://github.com/apache/flink/pull/9415#discussion_r312738501
 
 

 ##########
 File path: docs/dev/connectors/kafka.zh.md
 ##########
 @@ -197,41 +182,31 @@ stream = env
 </div>
 </div>
 
-### The `DeserializationSchema`
+### `DeserializationSchema`
 
-The Flink Kafka Consumer needs to know how to turn the binary data in Kafka into Java/Scala objects. The
-`DeserializationSchema` allows users to specify such a schema. The `T deserialize(byte[] message)`
-method gets called for each Kafka message, passing the value from Kafka.
+Flink Kafka Consumer 需要知道如何将 Kafka 中的二进制数据转换为 Java 或者 Scala 对象。 `DeserializationSchema` 允许用户指定这样的 schema ,为每条 Kafka 消息调用 `T deserialize(byte[] message)` 方法,从 Kafka 中传递值。
 
-It is usually helpful to start from the `AbstractDeserializationSchema`, which takes care of describing the
-produced Java/Scala type to Flink's type system. Users that implement a vanilla `DeserializationSchema` need
-to implement the `getProducedType(...)` method themselves.
+从 `AbstractDeserializationSchema` 开始通常很有帮助,它负责将生成的 Java 或 Scala 类型描述为 Flink 的类型系统。
+实现带有 vanilla `DeserializationSchema` 的用户需要自己实现 `getProducedType(...)` 方法。
 
-For accessing the key, value and metadata of the Kafka message, the `KafkaDeserializationSchema` has
-the following deserialize method `T deserialize(ConsumerRecord<byte[], byte[]> record)`.
+为了访问 Kafka 消息的 key 、 value 和元数据, `KafkaDeserializationSchema` 具有以下反序列化方法 `T deserialize(ConsumerRecord<byte[], byte[]> record)` 。
 
-For convenience, Flink provides the following schemas:
+为了方便使用, Flink 提供了以下几种 schemas :
 
-1. `TypeInformationSerializationSchema` (and `TypeInformationKeyValueSerializationSchema`) which creates
-    a schema based on a Flink's `TypeInformation`. This is useful if the data is both written and read by Flink.
-    This schema is a performant Flink-specific alternative to other generic serialization approaches.
+1. `TypeInformationSerializationSchema` (和 `TypeInformationKeyValueSerializationSchema`) 基于 Flink 的 `TypeInformation` 创建 `schema` 。
+    如果 Flink 既负责数据的读也负责写,那么这将是非常有用的。此 schema 是其他通用序列化方法的高性能 Flink 替代方案。
 
-2. `JsonDeserializationSchema` (and `JSONKeyValueDeserializationSchema`) which turns the serialized JSON
-    into an ObjectNode object, from which fields can be accessed using `objectNode.get("field").as(Int/String/...)()`.
-    The KeyValue objectNode contains a "key" and "value" field which contain all fields, as well as
-    an optional "metadata" field that exposes the offset/partition/topic for this message.
+2. `JsonDeserializationSchema` (和 `JSONKeyValueDeserializationSchema`) 将序列化的 JSON 转化为 ObjectNode 对象,可以使用 `objectNode.get("field").as(Int/String/...)()` 来访问某个字段。
+    KeyValue objectNode 包含一个含所有字段的 key 和 values 字段,以及公开此消息的 offset 或 partition 或 topic 的可选“元数据”字段。
 
 Review comment:
  ```suggestion
      KeyValue objectNode 包含一个含所有字段的 key 和 values 字段,以及一个可选的"metadata"字段,可以访问到消息的 offset、partition、topic 等信息。
  ```
   
  Here, "metadata" has to be accessed as a key, so it should not be translated.
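  To make the point concrete, below is a minimal Java sketch of the `JSONKeyValueDeserializationSchema` usage this paragraph of the docs describes. It is not part of the PR; the topic name, broker address, and group id are placeholders, and it assumes the universal Kafka connector that this page covers. The schema stores the message coordinates under the literal JSON key "metadata", which is why that word must stay untranslated:

  ```java
  import java.util.Properties;

  import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
  import org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema;

  public class MetadataFieldExample {

      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

          Properties properties = new Properties();
          properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
          properties.setProperty("group.id", "test");                    // placeholder group id

          // Passing `true` asks the schema to attach the optional "metadata" field.
          FlinkKafkaConsumer<ObjectNode> consumer = new FlinkKafkaConsumer<>(
                  "topic", new JSONKeyValueDeserializationSchema(true), properties);

          env.addSource(consumer)
                  // "metadata" is looked up by its literal JSON key; a translated
                  // name would point readers at a field that does not exist.
                  .map(node -> String.format("topic=%s partition=%d offset=%d",
                          node.get("metadata").get("topic").asText(),
                          node.get("metadata").get("partition").asInt(),
                          node.get("metadata").get("offset").asLong()))
                  .print();

          env.execute("metadata field example");
      }
  }
  ```

  If `node.get("metadata")` were spelled with a translated name, Jackson would simply return `null`, since the schema writes the field under its English name.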
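  Stepping back to the custom-schema paragraphs earlier in the hunk: a minimal sketch (the class name is made up for illustration) of starting from `AbstractDeserializationSchema`, which takes care of describing the produced type to Flink's type system, so only `deserialize(byte[])` is left to write, whereas a vanilla `DeserializationSchema` would also have to implement `getProducedType(...)`:

  ```java
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;

  import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;

  /**
   * Turns each Kafka message value into a UTF-8 String. Extending
   * AbstractDeserializationSchema means getProducedType(...) is derived
   * automatically from the type parameter.
   */
  public class Utf8StringSchema extends AbstractDeserializationSchema<String> {

      @Override
      public String deserialize(byte[] message) throws IOException {
          return new String(message, StandardCharsets.UTF_8);
      }
  }
  ```

  Such a schema would be handed to the consumer in place of the JSON one above, e.g. `new FlinkKafkaConsumer<>("topic", new Utf8StringSchema(), properties)`.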
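  When the key and metadata of the record are needed as well, the hunk points to `KafkaDeserializationSchema` and its `T deserialize(ConsumerRecord<byte[], byte[]> record)` method; a sketch under the same assumptions (class name again made up):

  ```java
  import java.nio.charset.StandardCharsets;

  import org.apache.flink.api.common.typeinfo.TypeInformation;
  import org.apache.flink.api.common.typeinfo.Types;
  import org.apache.flink.api.java.tuple.Tuple2;
  import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
  import org.apache.kafka.clients.consumer.ConsumerRecord;

  /** Exposes both the key and the value of each record as UTF-8 strings. */
  public class KeyValueSchema implements KafkaDeserializationSchema<Tuple2<String, String>> {

      @Override
      public boolean isEndOfStream(Tuple2<String, String> nextElement) {
          return false; // keep consuming indefinitely
      }

      @Override
      public Tuple2<String, String> deserialize(ConsumerRecord<byte[], byte[]> record) {
          String key = record.key() == null
                  ? null : new String(record.key(), StandardCharsets.UTF_8);
          String value = record.value() == null
                  ? null : new String(record.value(), StandardCharsets.UTF_8);
          return Tuple2.of(key, value);
      }

      @Override
      public TypeInformation<Tuple2<String, String>> getProducedType() {
          return Types.TUPLE(Types.STRING, Types.STRING);
      }
  }
  ```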
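  Finally, for item 1 of the list, a sketch of creating the `TypeInformation`-based schema; `MyEvent` is a stand-in POJO, not from the PR. Since the class implements both the serialization and the deserialization side, the same object can be handed to a producer and a consumer:

  ```java
  import org.apache.flink.api.common.ExecutionConfig;
  import org.apache.flink.api.common.serialization.TypeInformationSerializationSchema;
  import org.apache.flink.api.common.typeinfo.TypeInformation;

  public class TypeInfoSchemaExample {

      /** Stand-in POJO; any Flink-serializable type works. */
      public static class MyEvent {
          public String name;
          public long timestamp;
      }

      public static void main(String[] args) throws Exception {
          // Uses Flink's own serializer for MyEvent: fast, but the bytes are
          // only readable by Flink jobs, matching the caveat in the docs.
          TypeInformationSerializationSchema<MyEvent> schema =
                  new TypeInformationSerializationSchema<>(
                          TypeInformation.of(MyEvent.class), new ExecutionConfig());

          MyEvent event = new MyEvent();
          event.name = "click";
          event.timestamp = 42L;

          // Round-trip through the schema, as a Kafka producer/consumer pair would.
          byte[] bytes = schema.serialize(event);
          MyEvent copy = schema.deserialize(bytes);
          System.out.println(copy.name + " @ " + copy.timestamp);
      }
  }
  ```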
