This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git

commit 398ad188ac529355e25bbc4b1f4f15e53e86eea8
Author: zhangdong <493738...@qq.com>
AuthorDate: Sun Jul 23 11:24:40 2023 +0800

    [doc](catalog)paimon doc (#21966)
    
    code pr: #21910
---
 docs/en/docs/lakehouse/multi-catalog/paimon.md    | 75 ++++++++++++++++++-----
 docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md | 75 ++++++++++++++++++-----
 2 files changed, 120 insertions(+), 30 deletions(-)

diff --git a/docs/en/docs/lakehouse/multi-catalog/paimon.md b/docs/en/docs/lakehouse/multi-catalog/paimon.md
index cd9253288f..79e5b76681 100644
--- a/docs/en/docs/lakehouse/multi-catalog/paimon.md
+++ b/docs/en/docs/lakehouse/multi-catalog/paimon.md
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage
+## Usage Notes
 
-1. Currently, Doris only supports simple field types.
-2. Doris only supports Hive Metastore Catalogs currently. The usage is basically the same as that of Hive Catalogs. More types of Catalogs will be supported in future versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml, and hive-site.xml into the conf directories of both FE and BE. Doris reads the hadoop configuration files in the conf directory first, and then reads the configuration files under the path set in the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating a Catalog:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata.Currently, only support Hive service as Paimon's Catalog.
+### Creating a Catalog Based on FileSystem
 
-- Hive Metastore
+#### HDFS
+
+```sql
+CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
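+
+Once the catalog is created, it can be used like any other Doris catalog. A minimal usage sketch, assuming the statements are run in a Doris session:
+
+```sql
+-- switch the current session to the new Paimon catalog
+SWITCH paimon_hdfs;
+-- list the databases visible through this catalog
+SHOW DATABASES;
+```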
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+CREATE CATALOG `paimon_s3` PROPERTIES (
     "type" = "paimon",
-    "hive.metastore.uris" = "thrift://172.16.65.15:7004",
-    "dfs.ha.namenodes.HDFS1006531" = "nn2,nn1",
-    "dfs.namenode.rpc-address.HDFS1006531.nn2" = "172.16.65.115:4007",
-    "dfs.namenode.rpc-address.HDFS1006531.nn1" = "172.16.65.15:4007",
-    "dfs.nameservices" = "HDFS1006531",
-    "hadoop.username" = "hadoop",
-    "warehouse" = "hdfs://HDFS1006531/data/paimon",
-    "dfs.client.failover.proxy.provider.HDFS1006531" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    "warehouse" = 
"s3://paimon-1308700295.cos.ap-beijing.myqcloud.com/paimoncos",
+    "s3.endpoint"="cos.ap-beijing.myqcloud.com",
+    "s3.access_key"="ak",
+    "s3.secret_key"="sk"
 );
+
+```
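+
+Tables can also be queried without switching catalogs by using fully qualified names. A sketch; the database `db1` and table `tbl1` below are hypothetical placeholders:
+
+```sql
+-- query a Paimon table directly via catalog.database.table qualification
+SELECT * FROM paimon_s3.db1.tbl1 LIMIT 10;
+```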
+
+#### OSS
+
+```sql
+CREATE CATALOG `paimon_oss` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "oss://paimon-zd/paimonoss",
+    "oss.endpoint"="oss-cn-beijing.aliyuncs.com",
+    "oss.access_key"="ak",
+    "oss.secret_key"="sk"
+);
+
+```
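+
+If tables are created or dropped on the Paimon side after the catalog is created, the cached metadata can be re-synced. A sketch using the generic catalog refresh statement:
+
+```sql
+-- refresh the catalog's metadata cache to pick up external changes
+REFRESH CATALOG paimon_oss;
+```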
+
+### Creating a Catalog Based on Hive Metastore
+
+```sql
+CREATE CATALOG `paimon_hms` PROPERTIES (
+    "type" = "paimon",
+    "paimon.catalog.type"="hms",
+    "warehouse" = "hdfs://HDFS8000871/user/zhangdong/paimon2",
+    "hive.metastore.uris" = "thrift://172.21.0.44:7004",
+    "dfs.nameservices'='HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871'='nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
 ```
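+
+As with the filesystem-based catalogs, you can switch to this catalog and inspect how a table's schema is exposed in Doris. A sketch; `db1` and `tbl1` are hypothetical placeholders:
+
+```sql
+SWITCH paimon_hms;
+-- show the Doris-side schema of a Paimon table
+DESC db1.tbl1;
+```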
 
 ## Column Type Mapping
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md b/docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md
index 0ed5a12caa..7a14c879ae 100644
--- a/docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage Restrictions
+## Usage Notes
 
-1. Currently, only simple field types are supported.
-2. Currently, only Hive Metastore Catalogs are supported, so the usage is basically the same as that of Hive Catalog. Other types of Catalog will be supported in future versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml, and hive-site.xml into the conf directories of both FE and BE. Doris reads the hadoop configuration files in the conf directory first, and then reads the configuration files under the path set in the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating a Catalog:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Creating a Catalog Based on FileSystem
 
-- Hive Metastore as the metadata service
+#### HDFS
+
+```sql
+CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
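+
+Once the catalog is created, it can be used like any other Doris catalog. A minimal usage sketch, assuming the statements are run in a Doris session:
+
+```sql
+-- switch the current session to the new Paimon catalog
+SWITCH paimon_hdfs;
+-- list the databases visible through this catalog
+SHOW DATABASES;
+```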
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+CREATE CATALOG `paimon_s3` PROPERTIES (
     "type" = "paimon",
-    "hive.metastore.uris" = "thrift://172.16.65.15:7004",
-    "dfs.ha.namenodes.HDFS1006531" = "nn2,nn1",
-    "dfs.namenode.rpc-address.HDFS1006531.nn2" = "172.16.65.115:4007",
-    "dfs.namenode.rpc-address.HDFS1006531.nn1" = "172.16.65.15:4007",
-    "dfs.nameservices" = "HDFS1006531",
-    "hadoop.username" = "hadoop",
-    "warehouse" = "hdfs://HDFS1006531/data/paimon",
-    "dfs.client.failover.proxy.provider.HDFS1006531" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    "warehouse" = 
"s3://paimon-1308700295.cos.ap-beijing.myqcloud.com/paimoncos",
+    "s3.endpoint"="cos.ap-beijing.myqcloud.com",
+    "s3.access_key"="ak",
+    "s3.secret_key"="sk"
 );
+
+```
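+
+Tables can also be queried without switching catalogs by using fully qualified names. A sketch; the database `db1` and table `tbl1` below are hypothetical placeholders:
+
+```sql
+-- query a Paimon table directly via catalog.database.table qualification
+SELECT * FROM paimon_s3.db1.tbl1 LIMIT 10;
+```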
+
+#### OSS
+
+```sql
+CREATE CATALOG `paimon_oss` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "oss://paimon-zd/paimonoss",
+    "oss.endpoint"="oss-cn-beijing.aliyuncs.com",
+    "oss.access_key"="ak",
+    "oss.secret_key"="sk"
+);
+
+```
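+
+If tables are created or dropped on the Paimon side after the catalog is created, the cached metadata can be re-synced. A sketch using the generic catalog refresh statement:
+
+```sql
+-- refresh the catalog's metadata cache to pick up external changes
+REFRESH CATALOG paimon_oss;
+```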
+
+### Creating a Catalog Based on Hive Metastore
+
+```sql
+CREATE CATALOG `paimon_hms` PROPERTIES (
+    "type" = "paimon",
+    "paimon.catalog.type"="hms",
+    "warehouse" = "hdfs://HDFS8000871/user/zhangdong/paimon2",
+    "hive.metastore.uris" = "thrift://172.21.0.44:7004",
+    "dfs.nameservices'='HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871'='nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
 ```
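+
+As with the filesystem-based catalogs, you can switch to this catalog and inspect how a table's schema is exposed in Doris. A sketch; `db1` and `tbl1` are hypothetical placeholders:
+
+```sql
+SWITCH paimon_hms;
+-- show the Doris-side schema of a Paimon table
+DESC db1.tbl1;
+```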
 
 ## Column Type Mapping

