guoweiM commented on a change in pull request #14077: URL: https://github.com/apache/flink/pull/14077#discussion_r524240775
########## File path: docs/dev/connectors/file_sink.zh.md ##########

```diff
@@ -329,18 +329,13 @@ stream.sinkTo(FileSink.forBulkFormat(

 #### ORC Format

-To enable the data to be bulk encoded in ORC format, Flink offers [OrcBulkWriterFactory]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/formats/orc/writers/OrcBulkWriterFactory.html)
-which takes a concrete implementation of [Vectorizer]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/orc/vector/Vectorizer.html).
+为了使用基于批量编码的 ORC 格式,Flink提供了 [OrcBulkWriterFactory]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/formats/orc/writers/OrcBulkWriterFactory.html) ,它需要用户提供一个 [Vectorizer]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/orc/vector/Vectorizer.html) 的具体实现。

-Like any other columnar format that encodes data in bulk fashion, Flink's `OrcBulkWriter` writes the input elements in batches. It uses
-ORC's `VectorizedRowBatch` to achieve this.
+和其它基于列式存储的批量编码格式类似,Flink中的 `OrcBulkWriter` 将数据按批写出,它通过 ORC 的 VectorizedRowBatch 来实现这一点。
```

Review comment:

   ,---->。 (the half-width comma before "它通过" should be the full-width full stop 。)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
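For context on the API the quoted passage describes: a sketch of how a `Vectorizer` implementation plugs into `OrcBulkWriterFactory`, following the English version of this doc page. The `Person` type, its fields, and the `struct<...>` schema string are illustrative assumptions, not part of the diff under review; the `Vectorizer` base class and `VectorizedRowBatch` column types are real Flink/ORC classes.

```java
import java.io.IOException;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.orc.vector.Vectorizer;
import org.apache.flink.orc.writer.OrcBulkWriterFactory;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Hypothetical POJO used only for this sketch.
class Person {
    private final String name;
    private final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
    String getName() { return name; }
    int getAge() { return age; }
}

// A Vectorizer maps one input element into the columns of ORC's
// VectorizedRowBatch; the factory calls it once per element.
class PersonVectorizer extends Vectorizer<Person> implements Serializable {

    PersonVectorizer(String schema) {
        super(schema);
    }

    @Override
    public void vectorize(Person element, VectorizedRowBatch batch) throws IOException {
        BytesColumnVector nameColVector = (BytesColumnVector) batch.cols[0];
        LongColumnVector ageColVector = (LongColumnVector) batch.cols[1];
        // batch.size is the next free row index; increment it as we fill a row.
        int row = batch.size++;
        nameColVector.setVal(row, element.getName().getBytes(StandardCharsets.UTF_8));
        ageColVector.vector[row] = element.getAge();
    }
}

class OrcSinkSketch {
    static FileSink<Person> buildSink(Path outputBasePath) {
        // The schema string is an ORC TypeDescription in struct syntax.
        String schema = "struct<_col0:string,_col1:int>";
        OrcBulkWriterFactory<Person> factory =
                new OrcBulkWriterFactory<>(new PersonVectorizer(schema));
        return FileSink.forBulkFormat(outputBasePath, factory).build();
    }
}
```

This mirrors the English `file_sink.md` example that the quoted Chinese paragraphs translate; in a real job the returned sink would be attached with `stream.sinkTo(...)`.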