Sxnan commented on a change in pull request #11167: [FLINK-16129] [docs] Translate /ops/filesystems/index.zh.md
URL: https://github.com/apache/flink/pull/11167#discussion_r382432036
##########
File path: docs/ops/filesystems/index.zh.md
##########
@@ -24,104 +24,82 @@ specific language governing permissions and limitations under the License.
 -->
 
-Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery.
-These are some of most of the popular file systems, including *local*, *hadoop-compatible*, *Amazon S3*, *MapR FS*, *OpenStack Swift FS*, *Aliyun OSS* and *Azure Blob Storage*.
+Apache Flink 使用文件系统来消耗和持久化地存储数据,以处理应用结果以及容错与恢复。以下是一些最常用的文件系统:*本地存储*,*hadoop-compatible*,*Amazon S3*,*MapR FS*,*OpenStack Swift FS*,*阿里云 OSS* 和 *Azure Blob Storage*。
 
-The file system used for a particular file is determined by its URI scheme.
-For example, `file:///home/user/text.txt` refers to a file in the local file system, while `hdfs://namenode:50010/data/user/text.txt` is a file in a specific HDFS cluster.
+文件使用的文件系统通过其 URI Scheme 指定。例如 `file:///home/user/text.txt` 表示一个在本地文件系统中的文件,`hdfs://namenode:50010/data/user/text.txt` 表示一个在指定 HDFS 集群中的文件。
 
-File system instances are instantiated once per process and then cached/pooled, to avoid configuration overhead per stream creation and to enforce certain constraints, such as connection/stream limits.
+文件系统在每个进程实例化一次,然后进行缓存/池化,从而避免每次创建流时的配置开销,并强制执行特定的约束,如连接/流的限制。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Local File System
+## 本地文件系统
 
-Flink has built-in support for the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
-It can be used by default without additional configuration. Local files are referenced with the *file://* URI scheme.
+Flink 原生支持本地机器上的文件系统,包括任何挂载到本地文件系统的 NFS 或 SAN 驱动器,默认即可使用,无需额外配置。本地文件可通过 *file://* URI Scheme 引用。
 
-## Pluggable File Systems
+## 外部文件系统
 
-The Apache Flink project supports the following file systems:
+Apache Flink 支持下列文件系统:
+  - [**Amazon S3**](./s3.html) 对象存储由 `flink-s3-fs-presto` 和 `flink-s3-fs-hadoop` 两种替代实现提供支持。这两种实现都是独立的,没有依赖项。
 
-  - [**Amazon S3**](./s3.html) object storage is supported by two alternative implementations: `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
-    Both implementations are self-contained with no dependency footprint.
+  - **MapR FS** 文件系统适配器已在 Flink 的主发行版中通过 *maprfs://* URI Scheme 支持。MapR 库需要在 classpath 中指定(例如在 `lib` 目录中)。
 
-  - **MapR FS** file system adapter is already supported in the main Flink distribution under the *maprfs://* URI scheme.
-    You must provide the MapR libraries in the classpath (for example in `lib` directory).
+  - **OpenStack Swift FS** 由 `flink-swift-fs-hadoop` 支持,并通过 *swift://* URI scheme 使用。该实现基于 [Hadoop Project](https://hadoop.apache.org/),但其是独立的,没有依赖项。
+    将 Flink 作为库使用时,使用该文件系统需要添加相应的 Maven 依赖项(`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`)。
 
-  - **OpenStack Swift FS** is supported by `flink-swift-fs-hadoop` and registered under the *swift://* URI scheme.
-    The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
-    To use it when using Flink as a library, add the respective maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`).
-
-  - **[Aliyun Object Storage Service](./oss.html)** is supported by `flink-oss-fs-hadoop` and registered under the *oss://* URI scheme.
-    The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+  - **[阿里云对象存储](./oss.html)**由 `flink-oss-fs-hadoop` 支持,并通过 *oss://* URI scheme 使用。该实现基于 [Hadoop Project](https://hadoop.apache.org/),但其是独立的,没有依赖项。
 
-  - **[Azure Blob Storage](./azure.html)** is supported by `flink-azure-fs-hadoop` and registered under the *wasb(s)://* URI schemes.
-    The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+  - **[Azure Blob Storage](./azure.html)** 由`flink-azure-fs-hadoop` 支持,并通过 *wasb(s)://* URI scheme 使用。该实现基于 [Hadoop Project](https://hadoop.apache.org/),但其是独立的,没有依赖项。
 
-Except **MapR FS**, you can and should use any of them as [plugins](../plugins.html).
+除 **MapR FS** 之外,上述文件系统可以并且需要作为[插件](../plugins.html)使用。
 
-To use a pluggable file systems, copy the corresponding JAR file from the `opt` directory to a directory under `plugins` directory
-of your Flink distribution before starting Flink, e.g.
+使用外部文件系统时,在启动 Flink 之前需将对应的 JAR 文件从 `opt` 目录复制到 Flink 发行版 `plugin` 目录下的某一文件夹中,例如:
 
 {% highlight bash %}
 mkdir ./plugins/s3-fs-hadoop
 cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
 {% endhighlight %}
 
-<span class="label label-danger">Attention</span> The [plugin](../plugins.html) mechanism for file systems was introduced in Flink version `1.9` to
-support dedicated Java class loaders per plugin and to move away from the class shading mechanism.
-You can still use the provided file systems (or your own implementations) via the old mechanism by copying the corresponding
-JAR file into `lib` directory. However, **since 1.10, s3 plugins must be loaded through the plugin mechanism**; the old
-way no longer works as these plugins are not shaded anymore (or more specifically the classes are not relocated since 1.10).
+<span class="label label-danger">注意</span> 文件系统的[插件](../plugins.html)机制在 Flink 版本 1.9 中引入,以支持每个插件专有 Java 类加载器,并避免类隐藏机制。您仍然可以通过旧机制使用文件系统,即将对应的 JAR 文件复制到 `lib` 目录中,或使用您自己的实现方式,但是从版本 1.10 开始,**S3 插件必须通过插件机制加载**,因为这些插件不再被隐藏(版本 1.10 之后类不再被重定位),旧机制不再可用。
 
-It's encouraged to use the [plugins](../plugins.html)-based loading mechanism for file systems that support it. Loading file systems components from the `lib`
-directory will not supported in future Flink versions.
+尽可能通过基于[插件](../plugins.html)的加载机制使用支持的文件系统。未来的 Flink 版本将不再支持通过 `lib` 目录加载文件系统组件。
 
-## Adding a new pluggable File System implementation
+## 添加新的外部文件系统实现
 
-File systems are represented via the `org.apache.flink.core.fs.FileSystem` class, which captures the ways to access and modify files and objects in that file system.
+文件系统由类 `org.apache.flink.core.fs.FileSystem` 表示,该类定义了访问与修改文件系统中文件与对象的方法。
 
-To add a new file system:
+要添加一个新的文件系统:

Review comment:
   添加一个新的文件系统:
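
For readers following the section under review ("添加新的外部文件系统实现" / adding a new pluggable file system implementation), below is a minimal sketch of what the entry point of such an implementation can look like. It assumes Flink's `org.apache.flink.core.fs.FileSystemFactory` SPI, discovered through a `META-INF/services/org.apache.flink.core.fs.FileSystemFactory` service entry; the package `example`, the class `MyFileSystemFactory`, and the `myfs` scheme are hypothetical names used only for illustration, and the factory hands back Flink's shared local file system as a stand-in for a real `FileSystem` subclass.

package example;

import java.io.IOException;
import java.net.URI;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystemFactory;
import org.apache.flink.core.fs.local.LocalFileSystem;

/**
 * Hypothetical factory for a pluggable file system answering the (made-up)
 * "myfs://" URI scheme. Flink discovers such factories via the Java
 * ServiceLoader, i.e. a file named
 * META-INF/services/org.apache.flink.core.fs.FileSystemFactory
 * containing the line "example.MyFileSystemFactory".
 */
public class MyFileSystemFactory implements FileSystemFactory {

    @Override
    public String getScheme() {
        // URIs such as myfs://host/path/to/file are routed to this factory.
        return "myfs";
    }

    @Override
    public FileSystem create(URI fsUri) throws IOException {
        // A real implementation would return its own subclass of
        // org.apache.flink.core.fs.FileSystem here; the shared local file
        // system is used only to keep this sketch self-contained.
        return LocalFileSystem.getSharedInstance();
    }
}

Packaged as a self-contained JAR and placed in its own folder under `plugins/`, such a factory would then be picked up through the same plugin mechanism described in the diff above.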