wuchong commented on a change in pull request #9415: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into C…
URL: https://github.com/apache/flink/pull/9415#discussion_r312739159
 
 

 ##########
 File path: docs/dev/connectors/kafka.zh.md
 ##########
 @@ -448,61 +383,36 @@ val stream = env.addSource(myConsumer)
 </div>
 </div>
 
-In the above example, all topics with names that match the specified regular expression
-(starting with `test-topic-` and ending with a single digit) will be subscribed by the consumer
-when the job starts running.
-
-To allow the consumer to discover dynamically created topics after the job started running,
-set a non-negative value for `flink.partition-discovery.interval-millis`. This allows
-the consumer to discover partitions of new topics with names that also match the specified
-pattern.
-
-### Kafka Consumers Offset Committing Behaviour Configuration
-
-The Flink Kafka Consumer allows configuring the behaviour of how offsets
-are committed back to Kafka brokers (or Zookeeper in 0.8). Note that the
-Flink Kafka Consumer does not rely on the committed offsets for fault
-tolerance guarantees. The committed offsets are only a means to expose
-the consumer's progress for monitoring purposes.
-
-The way to configure offset commit behaviour is different, depending on
-whether or not checkpointing is enabled for the job.
-
- - *Checkpointing disabled:* if checkpointing is disabled, the Flink Kafka
- Consumer relies on the automatic periodic offset committing capability
- of the internally used Kafka clients. Therefore, to disable or enable offset
- committing, simply set the `enable.auto.commit` (or `auto.commit.enable`
- for Kafka 0.8) / `auto.commit.interval.ms` keys to appropriate values
- in the provided `Properties` configuration.
-
- - *Checkpointing enabled:* if checkpointing is enabled, the Flink Kafka
- Consumer will commit the offsets stored in the checkpointed states when
- the checkpoints are completed. This ensures that the committed offsets
- in Kafka brokers is consistent with the offsets in the checkpointed states.
- Users can choose to disable or enable offset committing by calling the
- `setCommitOffsetsOnCheckpoints(boolean)` method on the consumer (by default,
- the behaviour is `true`).
- Note that in this scenario, the automatic periodic offset committing
- settings in `Properties` is completely ignored.
-
-### Kafka Consumers and Timestamp Extraction/Watermark Emission
-
-In many scenarios, the timestamp of a record is embedded (explicitly or implicitly) in the record itself.
-In addition, the user may want to emit watermarks either periodically, or in an irregular fashion, e.g. based on
-special records in the Kafka stream that contain the current event-time watermark. For these cases, the Flink Kafka
-Consumer allows the specification of an `AssignerWithPeriodicWatermarks` or an `AssignerWithPunctuatedWatermarks`.
-
-You can specify your custom timestamp extractor/watermark emitter as described
-[here]({{ site.baseurl }}/dev/event_timestamps_watermarks.html), or use one from the
-[predefined ones]({{ site.baseurl }}/dev/event_timestamp_extractors.html). After doing so, you
-can pass it to your consumer in the following way:
+在上面的例子中,当 Job 开始运行时,Consumer 将订阅名称与指定正则表达式匹配的所有主题(以 `test-topic-` 开头并以单个数字结尾)。
+
+要允许 consumer 在作业开始运行后发现动态创建的主题,请为 `flink.partition-discovery.interval-millis` 设置一个非负值。这允许 consumer 发现名称与指定模式匹配的新主题的分区。
+
+### Kafka Consumer 提交 offset 的行为配置
+
+Flink Kafka Consumer 允许配置如何将 offset 提交回 Kafka broker(或 0.8 版本的 Zookeeper)的行为。请注意:Flink Kafka Consumer 并不依赖提交的 offset 来实现容错保证。提交的 offset 只是一种将 consumer 的消费进度暴露出来以便监控的手段。
+
+配置 offset 提交行为的方式各不相同,具体取决于是否为 job 启用了 checkpointing。
+
+ - *禁用 Checkpointing:* 如果禁用了 checkpointing,则 Flink Kafka Consumer 依赖于内部使用的 Kafka client 的自动定期 offset 提交功能。
+ 因此,要禁用或启用 offset 提交,只需在提供的 `Properties` 配置中,将 `enable.auto.commit`(Kafka 0.8 为 `auto.commit.enable`)和 `auto.commit.interval.ms` 这两个键设置为适当的值。
+
+ - *启用 Checkpointing:* 如果启用了 checkpointing,那么当 checkpoint 完成时,Flink Kafka Consumer 将提交存储在 checkpoint 状态中的 offset。
+ 这确保了 Kafka broker 中提交的 offset 与 checkpoint 状态中的 offset 保持一致。
+ 用户可以通过调用 consumer 上的 `setCommitOffsetsOnCheckpoints(boolean)` 方法来禁用或启用 offset 提交(默认情况下为 true)。
+ 注意,在这种情况下,`Properties` 中配置的自动定期 offset 提交设置将被完全忽略。
+
+### Kafka Consumer 与时间戳提取和 Watermark 发送
+
+在许多场景中,记录的时间戳被(显式或隐式地)嵌入在记录本身中。此外,用户可能希望周期性地或以不规则的方式(例如基于 Kafka 流中携带当前事件时间 watermark 的特殊记录)发出 watermark。对于这些情况,Flink Kafka Consumer 允许指定 `AssignerWithPeriodicWatermarks` 或 `AssignerWithPunctuatedWatermarks`。
+
+你可以按照[此处]({{ site.baseurl }}/dev/event_timestamps_watermarks.html)的说明指定自定义的时间戳提取器/watermark 发送器,或者使用[预定义的提取器]({{ site.baseurl }}/dev/event_timestamp_extractors.html)。然后,你可以按照如下方式将其传递给你的 consumer:
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 {% highlight java %}
 Properties properties = new Properties();
 properties.setProperty("bootstrap.servers", "localhost:9092");
-// only required for Kafka 0.8
+// 仅限于 Kafka 0.8 使用
 
 Review comment:
   ```suggestion
   // 仅 Kafka 0.8 需要
   ```
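As context for the pattern-subscription paragraph in the hunk above: the consumer subscribes to every topic whose name matches the regular expression, here `test-topic-` followed by exactly one digit. A dependency-free sketch of that matching rule (the `TopicPatternDemo` class and `matchingTopics` helper are illustrative, not part of the Flink API; Flink's consumer accepts a `java.util.regex.Pattern` directly):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternDemo {
    // The pattern from the docs example: "test-topic-" followed by a single digit.
    static final Pattern TOPIC_PATTERN = Pattern.compile("test-topic-[0-9]");

    // Illustrative helper: which topic names would a pattern subscription pick up?
    static List<String> matchingTopics(List<String> allTopics) {
        return allTopics.stream()
                .filter(t -> TOPIC_PATTERN.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "test-topic-10" does NOT match: the pattern allows exactly one digit.
        System.out.println(matchingTopics(
                List.of("test-topic-1", "test-topic-10", "other-topic")));
    }
}
```

With a non-negative `flink.partition-discovery.interval-millis` in the consumer `Properties`, topics created after the job starts whose names match the same pattern are picked up as well.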

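Since the hunk also covers the `AssignerWithPeriodicWatermarks` section, here is a dependency-free sketch of the periodic-watermark idea only. The class and method names are assumptions for illustration; the logic mirrors a bounded-out-of-orderness extractor but this is not the Flink interface itself:

```java
// Sketch of the periodic-watermark idea: track the largest timestamp seen so
// far and report watermark = maxTimestamp - allowed out-of-orderness.
public class BoundedLatenessTracker {
    private final long maxOutOfOrdernessMs;
    // Start offset by the lateness so the initial watermark does not underflow.
    private long currentMaxTimestamp;

    public BoundedLatenessTracker(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
        this.currentMaxTimestamp = Long.MIN_VALUE + maxOutOfOrdernessMs;
    }

    // Called once per record, analogous to extractTimestamp(...).
    public long extractTimestamp(long recordTimestamp) {
        currentMaxTimestamp = Math.max(currentMaxTimestamp, recordTimestamp);
        return recordTimestamp;
    }

    // Called periodically, analogous to getCurrentWatermark().
    public long currentWatermark() {
        return currentMaxTimestamp - maxOutOfOrdernessMs;
    }

    public static void main(String[] args) {
        BoundedLatenessTracker tracker = new BoundedLatenessTracker(500);
        tracker.extractTimestamp(1_000);
        tracker.extractTimestamp(3_000);
        tracker.extractTimestamp(2_000); // out-of-order record; max stays 3000
        System.out.println(tracker.currentWatermark()); // 3000 - 500 = 2500
    }
}
```

A punctuated assigner differs only in that the watermark decision is made per record (e.g. on special marker records) rather than on a timer.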
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
