wuchong commented on a change in pull request #9415: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into C…
URL: https://github.com/apache/flink/pull/9415#discussion_r312738797
 
 

 ##########
 File path: docs/dev/connectors/kafka.zh.md
 ##########
 @@ -342,74 +300,51 @@ 
myConsumer.setStartFromSpecificOffsets(specificStartOffsets)
 </div>
 </div>
 
-The above example configures the consumer to start from the specified offsets for
-partitions 0, 1, and 2 of topic `myTopic`. The offset values should be the
-next record that the consumer should read for each partition. Note that
-if the consumer needs to read a partition which does not have a specified
-offset within the provided offsets map, it will fallback to the default
-group offsets behaviour (i.e. `setStartFromGroupOffsets()`) for that
-particular partition.
+The above example configures the consumer to start from the specified offsets for partitions 0, 1, and 2 of topic `myTopic`. The offset value is the next record that the consumer should read for each partition. Note: if the consumer needs to read a partition that does not have a specified offset in the provided offsets map, it will fall back to the default group offsets behavior (i.e. `setStartFromGroupOffsets()`) for that particular partition.
+
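For reference, a minimal Java sketch of how the `specificStartOffsets` map used above might be populated, mirroring the example earlier on this page; the partition numbers and offset values are illustrative, and `KafkaTopicPartition` comes from `org.apache.flink.streaming.connectors.kafka.internals`:

{% highlight java %}
// Illustrative values only: map each partition of "myTopic" to the offset
// from which the consumer should start reading.
Map<KafkaTopicPartition, Long> specificStartOffsets = new HashMap<>();
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 2), 43L);

// myConsumer is an already-constructed FlinkKafkaConsumer
myConsumer.setStartFromSpecificOffsets(specificStartOffsets);
{% endhighlight %}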
 
-Note that these start position configuration methods do not affect the start position when the job is
-automatically restored from a failure or manually restored using a savepoint.
-On restore, the start position of each Kafka partition is determined by the
-offsets stored in the savepoint or checkpoint
-(please see the next section for information about checkpointing to enable
-fault tolerance for the consumer).
+Note that these start position configuration methods do not affect the start position when the job is automatically restored from a failure or manually restored using a savepoint. On restore, the start position of each Kafka partition is determined by the offsets stored in the savepoint or checkpoint (please see the next section for information about checkpointing to enable fault tolerance for the consumer).
 
-### Kafka Consumers and Fault Tolerance
+### Kafka Consumers and Fault Tolerance
 
-With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all
-its Kafka offsets, together with the state of other operations, in a consistent manner. In case of a job failure, Flink will restore
-the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were
-stored in the checkpoint.
+With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all its Kafka offsets, together with the state of other operators, in a consistent manner. In case of a job failure, Flink will restore the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were stored in the checkpoint.
 
-The interval of drawing checkpoints therefore defines how much the program may have to go back at most, in case of a failure.
+The interval at which checkpoints are drawn therefore defines how far back the program may have to go at most, in case of a failure.
 
-To use fault tolerant Kafka Consumers, checkpointing of the topology needs to be enabled at the execution environment:
+To use fault tolerant Kafka Consumers, checkpointing of the topology needs to be enabled in the execution environment:
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 {% highlight java %}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+env.enableCheckpointing(5000); // execute a checkpoint every 5000 milliseconds
 {% endhighlight %}
 </div>
 <div data-lang="scala" markdown="1">
 {% highlight scala %}
 val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+env.enableCheckpointing(5000) // execute a checkpoint every 5000 milliseconds
 {% endhighlight %}
 </div>
 </div>
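For completeness, a minimal Java sketch of a consumer added as a source to such a checkpointed environment, following the consumer construction shown earlier on this page; the topic name and connection properties are illustrative:

{% highlight java %}
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");

// Once checkpointing is enabled as above, the consumer's offsets are
// checkpointed together with the rest of the topology's state.
FlinkKafkaConsumer<String> myConsumer =
    new FlinkKafkaConsumer<>("myTopic", new SimpleStringSchema(), properties);

DataStream<String> stream = env.addSource(myConsumer);
{% endhighlight %}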
 
-Also note that Flink can only restart the topology if enough processing slots are available to restart the topology.
-So if the topology fails due to loss of a TaskManager, there must still be enough slots available afterwards.
-Flink on YARN supports automatic restart of lost YARN containers.
+Also note that Flink can only restart the topology if enough processing slots are available to restart it. So if the topology fails due to the loss of a TaskManager, there must always be enough available slots afterwards. Flink on YARN supports automatic restart of lost YARN containers.
 
 Review comment:
  ```suggestion
  Also note that Flink restarts the topology only if enough available slots exist. So if the topology fails due to the loss of a TaskManager, there must still be enough available slots afterwards. Flink on YARN supports automatic restart of lost YARN containers.
  ```
