bowenli86 commented on a change in pull request #9163: [FLINK-13086]add Chinese documentation for catalogs
URL: https://github.com/apache/flink/pull/9163#discussion_r305536707
 
 

 ##########
 File path: docs/dev/table/catalog.zh.md
 ##########
 @@ -23,101 +23,102 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Catalogs provide metadata, such as names, schemas, statistics of tables, and information about how to access data stored in a database or other external systems. Once a catalog is registered within a `TableEnvironment`, all its meta-objects are accessible from the Table API and SQL queries.
-
+Catalog提供元数据,例如名称,模式,表的统计信息,以及有关如何访问存储在数据库或其他外部系统中的数据的信息。一旦catalog在`TableEnvironment`中注册,就可以从Table API和SQL查询中访问其所有元对象。
 
 * This will be replaced by the TOC
 {:toc}
 
 
-Catalog Interface
+Catalog接口
 -----------------
 
-APIs are defined in `Catalog` interface. The interface defines a set of APIs to read and write catalog meta-objects such as database, tables, partitions, views, and functions.
+这些API定义在`Catalog`接口中。该接口定义了一组用于读写catalog元对象(如数据库,表,分区,视图和函数)的API。
 
 
-Catalog Meta-Objects Naming Structure
+Catalog元对象命名结构
 -------------------------------------
 
-Flink's catalogs use a strict two-level structure, that is, catalogs contain databases, and databases contain meta-objects. Thus, the full name of a meta-object is always structured as `catalogName`.`databaseName`.`objectName`.
+Flink的catalog使用严格的两级结构,即catalog包含数据库,数据库包含元对象。因此,元对象的全名总是具有`catalogName`.`databaseName`.`objectName`的结构。
+
 
-Each `TableEnvironment` has a `CatalogManager` to manager all registered catalogs. To ease access to meta-objects, `CatalogManager` has a concept of current catalog and current database. By setting current catalog and current database, users can use just the meta-object's name in their queries. This greatly simplifies user experience.
+每个`TableEnvironment`都有一个`CatalogManager`来管理所有已注册的catalog。为了便于访问元对象,`CatalogManager`具有当前catalog和当前数据库的概念。通过设置当前catalog和当前数据库,用户在查询中可以只使用元对象的名称。这极大地简化了用户体验。
 
-For example, a previous query as
+例如,原本如下的查询
 
 ```sql
 select * from mycatalog.mydb.myTable;
 ```
 
-can be shortened to
+可以缩短为
 
 ```sql
 select * from myTable;
 ```
 
-To querying tables in a different database under the current catalog, users don't need to specify the catalog name. In our example, it would be
+要查询当前catalog下其他数据库中的表,用户无需指定catalog名称。在我们的例子中,查询将是
 
 ```
 select * from mydb2.myTable2
 ```
 
-`CatalogManager` always has a built-in `GenericInMemoryCatalog` named `default_catalog`, which has a built-in default database named `default_database`. If no other catalog and database are explicitly set, they will be the current catalog and current database by default. All temp meta-objects, such as those defined by `TableEnvironment#registerTable` are registered to this catalog.
+`CatalogManager`总是有一个名为`default_catalog`的内置`GenericInMemoryCatalog`,它有一个名为`default_database`的内置默认数据库。如果没有显式设置其他catalog和数据库,它们默认就是当前catalog和当前数据库。所有临时元对象(例如通过`TableEnvironment#registerTable`定义的对象)都注册在这个catalog中。
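+上述默认catalog和默认数据库可以用一个简单的查询示例说明(其中`myTable`是一个假设的、已注册到内置catalog中的表):
+
+```sql
+-- myTable 注册在内置的 default_catalog.default_database 中,
+-- 因此下面两种写法指向同一张表:
+SELECT * FROM default_catalog.default_database.myTable;
+SELECT * FROM myTable;
+```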
+
 
-Users can set current catalog and database via `TableEnvironment.useCatalog(...)` and `TableEnvironment.useDatabase(...)` in Table API, or `USE CATALOG ...` and `USE DATABASE ...` in Flink SQL.
+用户可以通过Table API中的`TableEnvironment.useCatalog(...)`和`TableEnvironment.useDatabase(...)`,或Flink SQL中的`USE CATALOG ...`和`USE DATABASE ...`来设置当前catalog和当前数据库。
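+下面是上述Flink SQL设置方式的一个简要示例草图(其中`myhive`和`mydb`是假设的catalog名和数据库名,Table API中对应的调用即`TableEnvironment.useCatalog("myhive")`和`TableEnvironment.useDatabase("mydb")`):
+
+```sql
+-- 切换当前 catalog 和当前数据库
+USE CATALOG myhive;
+USE DATABASE mydb;
+-- 之后查询时只需使用表名即可
+SELECT * FROM myTable;
+```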
 
 
-Catalog Types
+Catalog的类型
 -------------
 
 ## GenericInMemoryCatalog
 
-The default catalog; all meta-objects in this catalog are stored in memory, and be will be lost once the session shuts down.
+默认catalog;此catalog中的所有元对象都存储在内存中,会话关闭后即会丢失。
 
-Its config entry value in SQL CLI yaml file is "generic_in_memory".
+它在SQL CLI yaml文件中的配置条目是"generic_in_memory"。
 
 ## HiveCatalog
 
-Flink's `HiveCatalog` can read and write both Flink and Hive meta-objects using Hive Metastore as persistent storage.
+Flink的`HiveCatalog`可以使用Hive Metastore作为持久化存储来读写Flink和Hive的元对象。
 
-Its config entry value in SQL CLI yaml file is "hive".
+它在SQL CLI yaml文件中的配置条目是"hive"。
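+例如,在SQL CLI的yaml文件中注册上述两种catalog的配置草图如下(其中catalog名称和`hive-conf-dir`路径均为假设值,具体字段以对应Flink版本的文档为准):
+
+```yaml
+catalogs:
+  - name: myhive
+    type: hive
+    hive-conf-dir: /opt/hive-conf   # 假设的Hive配置文件目录
+  - name: mymemory
+    type: generic_in_memory
+```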
 
-### Persist Flink meta-objects
+### 持久化Flink的元对象
 
-Historically, Flink meta-objects are only stored in memory and are per session based. That means users have to recreate all the meta-objects every time they start a new session.
+历史上,Flink的元对象只存储在内存中,并且以会话为单位。这意味着用户每次开始新会话时都必须重新创建所有元对象。
 
-To maintain meta-objects across sessions, users can choose to use `HiveCatalog` to persist all of users' Flink streaming (unbounded-stream) and batch (bounded-stream) meta-objects. Because Hive Metastore is only used for storage, Hive itself may not understand Flink's meta-objects stored in the metastore.
+为了跨会话维护元对象,用户可以选择使用`HiveCatalog`来持久化保存用户所有的Flink流(无界流)和批(有界流)元对象。由于Hive Metastore仅用于存储,Hive本身可能无法理解存储在Metastore中的Flink元对象。
 
-### Integrate Flink with Hive metadata
+### Flink与Hive元数据集成
 
-The ultimate goal for integrating Flink with Hive metadata is that:
+将Flink与Hive元数据集成的最终目标是:
 
-1. Existing meta-objects, like tables, views, and functions, created by Hive or other Hive-compatible applications can be used by Flink
+1. Flink可以使用由Hive或其他与Hive兼容的应用程序创建的现有元对象,如表,视图和函数
 
-2. Meta-objects created by `HiveCatalog` can be written back to Hive metastore such that Hive and other Hive-compatible applications can consume.
+2. 由`HiveCatalog`创建的元对象可以写回Hive Metastore,以便Hive和其他与Hive兼容的应用程序使用。
 
-## User-configured Catalog
+## 用户配置的Catalog
 
-Catalogs are pluggable. Users can develop custom catalogs by implementing the `Catalog` interface, which defines a set of APIs for reading and writing catalog meta-objects such as database, tables, partitions, views, and functions.
+Catalog是可插拔的。用户可以通过实现`Catalog`接口来开发自定义catalog,该接口定义了一组用于读写catalog元对象(如数据库,表,分区,视图和函数)的API。
 
 
 HiveCatalog
 -----------
 
-## Supported Hive Versions
+## 支持的Hive版本
 
-Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
+Flink的`HiveCatalog`正式支持Hive 2.3.4和1.2.1。
 
 Review comment:
   ```suggestion
   Flink的`HiveCatalog`官方支持Hive 2.3.4和1.2.1。
   ```
