This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new f6966d079be [opt](catalog) opt catalog doc (#1775)
f6966d079be is described below

commit f6966d079beeb8e12ddcfde71ae68a33a6234f0a
Author: Mingyu Chen (Rayner) <morning...@163.com>
AuthorDate: Mon Jan 13 19:18:03 2025 +0800

    [opt](catalog) opt catalog doc (#1775)
    
    ## Versions
    
    - [x] dev
    - [ ] 3.0
    - [ ] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/lakehouse/catalogs/hive-catalog.md            | 34 +++++++++---------
 docs/lakehouse/catalogs/hudi-catalog.md            |  4 +--
 docs/lakehouse/catalogs/iceberg-catalog.md         | 20 +++++------
 docs/lakehouse/catalogs/paimon-catalog.md          |  2 +-
 docs/lakehouse/file-formats/orc.md                 | 16 ++++++++-
 docs/lakehouse/file-formats/parquet.md             | 18 +++++++++-
 docs/lakehouse/file-formats/text.md                | 41 +++++++++++++++++++++-
 .../current/lakehouse/catalogs/hive-catalog.md     | 16 ++++-----
 .../current/lakehouse/catalogs/hudi-catalog.md     |  2 +-
 .../current/lakehouse/catalogs/iceberg-catalog.md  | 16 ++++-----
 .../current/lakehouse/catalogs/paimon-catalog.md   |  2 +-
 11 files changed, 120 insertions(+), 51 deletions(-)

diff --git a/docs/lakehouse/catalogs/hive-catalog.md b/docs/lakehouse/catalogs/hive-catalog.md
index 98938f3fcd3..78a000b9280 100644
--- a/docs/lakehouse/catalogs/hive-catalog.md
+++ b/docs/lakehouse/catalogs/hive-catalog.md
@@ -94,7 +94,7 @@ Hive transactional tables are supported from version 3.x onwards. For details, r
 
 * [HDFS](../storages/hdfs.md)
 * [AWS S3](../storages/s3.md)
-* [Google Cloud Storage](../storages/google-cloud-storage.md)
+* [Google Cloud Storage](../storages/gcs.md)
 * [Alibaba Cloud OSS](../storages/aliyun-oss.md)
 * [Tencent Cloud COS](../storages/tencent-cos.md)
 * [Huawei Cloud OBS](../storages/huawei-obs.md)
@@ -404,7 +404,7 @@ For a Hive Database, you must first delete all tables under that Database before
 
 ### Creating and Dropping Tables
 
-- Creating Tables
+- **Creating Tables**
 
   Doris supports creating both partitioned and non-partitioned tables in Hive.
 
@@ -476,11 +476,11 @@ For a Hive Database, you must first delete all tables under that Database before
   ```
   :::
 
-- Dropping Tables
+- **Dropping Tables**
 
  You can delete a Hive table using the `DROP TABLE` statement. When a table is deleted, all data, including partition data, is also removed.
 
-- Column Type Mapping
+- **Column Type Mapping**
 
  Refer to the [Column Type Mapping] section for details. Note the following restrictions:
 
@@ -488,15 +488,15 @@ For a Hive Database, you must first delete all tables under that Database before
  - Hive 3.0 supports setting default values. To set default values, explicitly add `"hive.version" = "3.0.0"` in the catalog properties.
  - If inserted data types are incompatible (e.g., inserting `'abc'` into a numeric type), the value will be converted to `null`.
 
-- Partitioning
+- **Partitioning**
 
  In Hive, partition types correspond to List partitions in Doris. Therefore, when creating a Hive partitioned table in Doris, use the List partition syntax, but there is no need to explicitly enumerate each partition. Doris will automatically create the corresponding Hive partition based on data values during data insertion. Single-column or multi-column partitioned tables are supported.
 
-- File Formats
+- **File Formats**
 
-  - **ORC** (default)
-  - **Parquet**
-  - **Text** (supported from versions 2.1.7 and 3.0.3)
+  - ORC (default)
+  - Parquet
+  - Text (supported from versions 2.1.7 and 3.0.3)
 
       Text format supports the following table properties:
 
@@ -507,16 +507,16 @@ For a Hive Database, you must first delete all tables under that Database before
                 - `serialization.null.format`: Format for storing `NULL` values. Default is `\N`.
                  - `escape.delim`: Escape character. Default is `\`.
 
-- Compression Formats
+- **Compression Formats**
 
-  - **Parquet**: snappy (default), zstd, plain (no compression)
-  - **ORC**: snappy, zlib (default), zstd, plain (no compression)
-  - **Text**: gzip, deflate, bzip2, zstd, lz4, lzo, snappy, plain (default, no compression)
+  - Parquet: snappy (default), zstd, plain (no compression)
+  - ORC: snappy, zlib (default), zstd, plain (no compression)
+  - Text: gzip, deflate, bzip2, zstd, lz4, lzo, snappy, plain (default, no compression)
 
-- Storage Medium
+- **Storage Medium**
 
-  - **HDFS**
-  - **Object Storage**
+  - HDFS
+  - Object Storage
 
 ## Subscribing to Hive Metastore Events
 
@@ -681,4 +681,4 @@ Here are examples of file operations under various scenarios:
 | Doris Version | Feature Support                              |
 | ------------- | --------------------------------------------- |
 | 2.1.6         | Support for writing back to Hive tables       |
-| 3.0.4         | Support for Hive tables in JsonSerDe format. Support for transactional tables in Hive4. |
\ No newline at end of file
+| 3.0.4         | Support for Hive tables in JsonSerDe format. Support for transactional tables in Hive4. |
diff --git a/docs/lakehouse/catalogs/hudi-catalog.md b/docs/lakehouse/catalogs/hudi-catalog.md
index 7ad02eb0573..c5f93b75852 100644
--- a/docs/lakehouse/catalogs/hudi-catalog.md
+++ b/docs/lakehouse/catalogs/hudi-catalog.md
@@ -86,7 +86,7 @@ The current dependent Hudi version is 0.15. It is recommended to access Hudi dat
 
 * [HDFS](../storages/hdfs.md)
 * [AWS S3](../storages/s3.md)
-* [Google Cloud Storage](../storages/google-cloud-storage.md)
+* [Google Cloud Storage](../storages/gcs.md)
 * [Alibaba Cloud OSS](../storages/aliyun-oss.md)
 * [Tencent Cloud COS](../storages/tencent-cos.md)
 * [Huawei Cloud OBS](../storages/huawei-obs.md)
@@ -226,4 +226,4 @@ By using `desc` to view the execution plan, you can see that Doris converts `@in
 
 | Doris Version | Feature Support                               |
 | ------------- | ---------------------------------------------- |
-| 2.1.8/3.0.4   | Hudi dependency upgraded to 0.15. Added Hadoop Hudi JNI Scanner. |
\ No newline at end of file
+| 2.1.8/3.0.4   | Hudi dependency upgraded to 0.15. Added Hadoop Hudi JNI Scanner. |
diff --git a/docs/lakehouse/catalogs/iceberg-catalog.md b/docs/lakehouse/catalogs/iceberg-catalog.md
index ae651c8b539..2d3feeba438 100644
--- a/docs/lakehouse/catalogs/iceberg-catalog.md
+++ b/docs/lakehouse/catalogs/iceberg-catalog.md
@@ -106,7 +106,7 @@ The current Iceberg dependency is version 1.4.3, which is compatible with higher
 
 * [HDFS](../storages/hdfs.md)
 * [AWS S3](../storages/s3.md)
-* [Google Cloud Storage](../storages/google-cloud-storage.md)
+* [Google Cloud Storage](../storages/gcs.md)
 * [Aliyun OSS](../storages/aliyun-oss.md)
 * [Tencent COS](../storages/tencent-cos.md)
 * [Huawei OBS](../storages/huawei-obs.md)
@@ -406,12 +406,12 @@ DROP DATABASE [IF EXISTS] iceberg.iceberg_db;
 ```
 
 :::caution
-For an Iceberg Database, you must first delete all tables under the database before you can delete the database itself; otherwise, an error will occur.
+For an Iceberg Database, you must first drop all tables under the database before you can drop the database itself; otherwise, an error will occur.
 :::
 
 ### Creating and Dropping Tables
 
-* Creating Tables
+* **Creating Tables**
 
  Doris supports creating both partitioned and non-partitioned tables in Iceberg.
 
@@ -459,7 +459,7 @@ For an Iceberg Database, you must first delete all tables under the database bef
 
  After creation, you can use the `SHOW CREATE TABLE` command to view the Iceberg table creation statement. For details about partition functions, see the [Partitioning](#) section.
 
-* Dropping Tables
+* **Dropping Tables**
 
  You can drop an Iceberg table using the `DROP TABLE` statement. Dropping a table will also remove its data, including partition data.
 
@@ -469,11 +469,11 @@ For an Iceberg Database, you must first delete all tables under the database bef
   DROP TABLE [IF EXISTS] iceberg_tbl;
   ```
 
-* Column Type Mapping
+* **Column Type Mapping**
 
   Refer to the [Column Type Mapping](#) section.
 
-* Partitioning
+* **Partitioning**
 
  Partition types in Iceberg correspond to List partitions in Doris. Therefore, when creating an Iceberg partitioned table in Doris, you should use the List partitioning syntax, but you don't need to explicitly enumerate each partition. Doris will automatically create the corresponding Iceberg partitions based on the data values during data insertion.
 
@@ -493,19 +493,19 @@ For an Iceberg Database, you must first delete all tables under the database bef
 
     * `truncate(L, col)`
 
-* File Formats
+* **File Formats**
 
   * Parquet (default)
 
   * ORC
 
-* Compression Formats
+* **Compression Formats**
 
   * Parquet: snappy, zstd (default), plain (no compression).
 
   * ORC: snappy, zlib (default), zstd, plain (no compression).
 
-* Storage Medium
+* **Storage Medium**
 
   * HDFS
 
@@ -518,4 +518,4 @@ For an Iceberg Database, you must first delete all tables under the database bef
 | Doris Version | Feature Support                        |
 | -------------- | -------------------------------------- |
 | 2.1.3          | Support for ORC file format, Equality Delete |
-| 2.1.6          | Support for DDL, DML                   |
\ No newline at end of file
+| 2.1.6          | Support for DDL, DML                   |
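Similarly, for the Iceberg hunks above, a hedged sketch of a partitioned table using the `truncate(L, col)` transform the doc mentions. All names are hypothetical; consult the Iceberg catalog page for the authoritative syntax.

```sql
-- Hypothetical names; an Iceberg partitioned table created from Doris using the
-- List-style syntax with a partition transform. Partitions are not enumerated;
-- Doris creates the corresponding Iceberg partitions on insert.
CREATE TABLE iceberg.iceberg_db.events (
    id   BIGINT,
    ts   DATETIME,
    city STRING
)
PARTITION BY LIST (truncate(10, city)) ();
-- Parquet is the documented default file format; ORC is also supported.

-- Dropping the table also removes its data, including partition data.
DROP TABLE IF EXISTS iceberg.iceberg_db.events;
```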
diff --git a/docs/lakehouse/catalogs/paimon-catalog.md b/docs/lakehouse/catalogs/paimon-catalog.md
index 3b68b1ff892..29e53c93a16 100644
--- a/docs/lakehouse/catalogs/paimon-catalog.md
+++ b/docs/lakehouse/catalogs/paimon-catalog.md
@@ -101,7 +101,7 @@ The currently dependent Paimon version is 0.8.1. Higher versions of Paimon table
 
 * [AWS S3](../storages/s3.md)
 
-* [Google Cloud Storage](../storages/google-cloud-storage.md)
+* [Google Cloud Storage](../storages/gcs.md)
 
 * [Alibaba Cloud OSS](../storages/aliyun-oss.md)
 
diff --git a/docs/lakehouse/file-formats/orc.md b/docs/lakehouse/file-formats/orc.md
index 4f6433224a7..169bd51f0a5 100644
--- a/docs/lakehouse/file-formats/orc.md
+++ b/docs/lakehouse/file-formats/orc.md
@@ -24,5 +24,19 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The document is under development, please refer to versioned doc 2.1 or 3.0
+This document introduces the support for reading and writing ORC file formats in Doris. It applies to the following functionalities:
 
+* Reading and writing data in the Catalog.
+* Reading data using Table Valued Functions.
+* Reading data with Broker Load.
+* Writing data during Export.
+* Writing data with Outfile.
+
+## Supported Compression Formats
+
+* uncompressed
+* snappy
+* lz4
+* zstd
+* lzo
+* zlib
\ No newline at end of file
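Since the new orc.md page lists the read/write paths without examples, here is a minimal sketch of two of them, the S3 table-valued function and `INTO OUTFILE`. The bucket, endpoint, table names, and credentials are placeholders, and the `s3.*` property keys shown are commonly used options rather than anything taken from this commit.

```sql
-- Read an ORC file through the S3 table-valued function (placeholder URI and credentials).
SELECT *
FROM S3(
    'uri'           = 's3://example-bucket/path/data.orc',
    'format'        = 'orc',
    's3.endpoint'   = 'https://s3.example.com',
    's3.region'     = 'us-east-1',
    's3.access_key' = '<ak>',
    's3.secret_key' = '<sk>'
)
LIMIT 10;

-- Write query results back out as ORC with Outfile (placeholder export path).
SELECT id, amount
FROM example_db.example_tbl
INTO OUTFILE 's3://example-bucket/export/result_'
FORMAT AS ORC
PROPERTIES (
    's3.endpoint'   = 'https://s3.example.com',
    's3.region'     = 'us-east-1',
    's3.access_key' = '<ak>',
    's3.secret_key' = '<sk>'
);
```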
diff --git a/docs/lakehouse/file-formats/parquet.md b/docs/lakehouse/file-formats/parquet.md
index 020051b03f9..370184376c2 100644
--- a/docs/lakehouse/file-formats/parquet.md
+++ b/docs/lakehouse/file-formats/parquet.md
@@ -24,5 +24,21 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The document is under development, please refer to versioned doc 2.1 or 3.0
+This document introduces the support for reading and writing Parquet file formats in Doris. It applies to the following features:
+
+* Reading and writing data in the Catalog.
+* Reading data using Table Valued Functions.
+* Reading data with Broker Load.
+* Writing data during Export.
+* Writing data with Outfile.
+
+## Supported Compression Formats
+
+* uncompressed
+* snappy
+* lz4
+* zstd
+* gzip
+* lzo
+* brotli
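The new parquet.md page follows the same structure; as one concrete read path, a hedged sketch of the HDFS table-valued function with `format = 'parquet'`. The HDFS address, username, and file path are placeholders.

```sql
-- Read a Parquet file through the HDFS table-valued function (placeholder cluster details).
SELECT *
FROM HDFS(
    'uri'             = 'hdfs://127.0.0.1:8020/user/doris/data/part-0.parquet',
    'fs.defaultFS'    = 'hdfs://127.0.0.1:8020',
    'hadoop.username' = 'hadoop',
    'format'          = 'parquet'
)
LIMIT 10;
```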
 
diff --git a/docs/lakehouse/file-formats/text.md b/docs/lakehouse/file-formats/text.md
index 07da713c918..e798a6d8423 100644
--- a/docs/lakehouse/file-formats/text.md
+++ b/docs/lakehouse/file-formats/text.md
@@ -24,5 +24,44 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The document is under development, please refer to versioned doc 2.1 or 3.0
+This document introduces the support for reading and writing text file formats in Doris.
+
+## Text/CSV
+
+* Catalog
+
+  Supports reading Hive tables in the `org.apache.hadoop.mapred.TextInputFormat` format.
+
+  Supports reading Hive tables in the `org.apache.hadoop.hive.serde2.OpenCSVSerde` format. (Supported from version 2.1.7)
+
+* Table Valued Function
+
+* Import
+
+  Import functionality supports Text/CSV formats. See the import documentation for details.
+
+* Export
+
+  Export functionality supports Text/CSV formats. See the export documentation for details.
+
+### Supported Compression Formats
+
+* uncompressed
+* gzip
+* deflate
+* bzip2
+* zstd
+* lz4
+* snappy
+* lzo
+
+## JSON
+
+* Catalog
+
+  Supports reading Hive tables in the `org.apache.hive.hcatalog.data.JsonSerDe` format. (Supported from version 3.0.4)
+
+* Import
+
+  Import functionality supports JSON formats. See the import documentation for details.
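To ground the Text/CSV read path described in the new text.md page, a hedged sketch of reading a gzip-compressed CSV through the S3 table-valued function. The URI and credentials are placeholders; `column_separator` and `compress_type` are commonly used option names, not values taken from this commit.

```sql
-- Read a gzip-compressed CSV through the S3 table-valued function (placeholders throughout).
SELECT *
FROM S3(
    'uri'              = 's3://example-bucket/path/data.csv.gz',
    'format'           = 'csv',
    'column_separator' = ',',
    'compress_type'    = 'gz',
    's3.endpoint'      = 'https://s3.example.com',
    's3.region'        = 'us-east-1',
    's3.access_key'    = '<ak>',
    's3.secret_key'    = '<sk>'
)
LIMIT 10;
```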
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.md
index c035d30e1a3..cdeca985411 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.md
@@ -100,7 +100,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 * [ AWS S3](../storages/s3.md)
 
-* [ Google Cloud Storage](../storages/google-cloud-storage.md)
+* [ Google Cloud Storage](../storages/gcs.md)
 
 * [ 阿里云 OSS](../storages/aliyun-oss.md)
 
@@ -412,7 +412,7 @@ DROP DATABASE [IF EXISTS] hive_ctl.hive_db;
 
 ### 创建和删除表
 
-* 创建
+* **创建**
 
   Doris 支持在 Hive 中创建分区或非分区表。
 
@@ -484,11 +484,11 @@ DROP DATABASE [IF EXISTS] hive_ctl.hive_db;
   ```
   :::
 
-* 删除
+* **删除**
 
   可以通过 `DROP TABLE` 语句删除一个 Hive 表。当前删除表后,会同时删除数据,包括分区数据。
 
-* 列类型映射
+* **列类型映射**
 
   参考【列类型映射】部分。需要额外注意一下限制:
 
@@ -496,11 +496,11 @@ DROP DATABASE [IF EXISTS] hive_ctl.hive_db;
  - Hive 3.0 支持设置默认值。如果需要设置默认值,则需要在 Catalog 属性中显示的添加 `"hive.version" = "3.0.0"`。
   - 插入数据后,如果类型不能够兼容,例如 `'abc'` 插入到数值类型,则会转为 `null` 值插入。
 
-* 分区
+* **分区**
 
  Hive 中的分区类型对应 Doris 中的 List 分区。因此,在 Doris 中 创建 Hive 分区表,需使用 List 分区的建表语句,但无需显式的枚举各个分区。在写入数据时,Doris 会根据数据的值,自动创建对应的 Hive 分区。支持创建单列或多列分区表。
 
-* 文件格式
+* **文件格式**
 
   * ORC(默认)
 
@@ -522,7 +522,7 @@ DROP DATABASE [IF EXISTS] hive_ctl.hive_db;
 
       * `escape.delim`:转移字符。默认 `\`。
 
-* 压缩格式
+* **压缩格式**
 
   * Parquet:snappy(默认)、zstd、plain。(Plain 就是不采用压缩)
 
@@ -530,7 +530,7 @@ DROP DATABASE [IF EXISTS] hive_ctl.hive_db;
 
   * Text:gzipm、defalte、bzip2、zstd、lz4、lzo、snappy、plain(默认)。(Plain 就是不采用压缩)
 
-* 存储介质
+* **存储介质**
 
   * HDFS
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
index 89b33ec2738..7526ce04b67 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
@@ -88,7 +88,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 * [ AWS S3](../storages/s3.md)
 
-* [ Google Cloud Storage](../storages/google-cloud-storage.md)
+* [ Google Cloud Storage](../storages/gcs.md)
 
 * [ 阿里云 OSS](../storages/aliyun-oss.md)
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.md
index 0e34031bae4..90ddcb14b0e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.md
@@ -113,7 +113,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 * [ AWS S3](../storages/s3.md)
 
-* [ Google Cloud Storage](../storages/google-cloud-storage.md)
+* [ Google Cloud Storage](../storages/gcs.md)
 
 * [ 阿里云 OSS](../storages/aliyun-oss.md)
 
@@ -421,7 +421,7 @@ DROP DATABASE [IF EXISTS] iceberg.iceberg_db;
 
 ### 创建和删除表
 
-* 创建
+* **创建**
 
   Doris 支持在 Iceberg 中创建分区或非分区表。
 
@@ -469,7 +469,7 @@ DROP DATABASE [IF EXISTS] iceberg.iceberg_db;
 
   创建后,可以通过 `SHOW CREATE TABLE` 命令查看 Iceberg 的建表语句。关于分区表的分区函数,可以参阅后面的【分区】小节。
 
-* 删除
+* **删除**
 
   可以通过 `DROP TABLE` 语句删除一个 Iceberg 表。当前删除表后,会同时删除数据,包括分区数据。
 
@@ -479,11 +479,11 @@ DROP DATABASE [IF EXISTS] iceberg.iceberg_db;
   DROP TABLE [IF EXISTS] iceberg_tbl;
   ```
 
-* 列类型映射
+* **列类型映射**
 
   参考【列类型映射】部分。
 
-* 分区
+* **分区**
 
  Iceberg 中的分区类型对应 Doris 中的 List 分区。因此,在 Doris 中 创建 Iceberg 分区表,需使用 List 分区的建表语句,但无需显式的枚举各个分区。在写入数据时,Doris 会根据数据的值,自动创建对应的 Iceberg 分区。
 
@@ -503,19 +503,19 @@ DROP DATABASE [IF EXISTS] iceberg.iceberg_db;
 
       * `truncate(L, col)`
 
-* 文件格式
+* **文件格式**
 
   * Parquet(默认)
 
   * ORC
 
-* 压缩格式
+* **压缩格式**
 
   * Parquet:snappy,zstd(默认),plain。(plain 就是不采用压缩)
 
   * ORC:snappy,zlib(默认),zstd,plain。(plain 就是不采用压缩)
 
-* 存储介质
+* **存储介质**
 
   * HDFS
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.md
index 5380c53fc8f..09813f1abd5 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.md
@@ -101,7 +101,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 * [ AWS S3](../storages/s3.md)
 
-* [ Google Cloud Storage](../storages/google-cloud-storage.md)
+* [ Google Cloud Storage](../storages/gcs.md)
 
 * [ 阿里云 OSS](../storages/aliyun-oss.md)
 

