caican00 commented on code in PR #6328:
URL: https://github.com/apache/gravitino/pull/6328#discussion_r1921690714


##########
docs/spark-connector/spark-catalog-paimon.md:
##########
@@ -0,0 +1,89 @@
+---
+title: "Spark connector Paimon catalog"
+slug: /spark-connector/spark-catalog-paimon
+keyword: spark connector paimon catalog
+license: "This software is licensed under the Apache License version 2."
+---
+
+The Apache Gravitino Spark connector offers the capability to read and write Paimon tables, with the metadata managed by the Gravitino server. To use the Paimon catalog within the Spark connector, you must download the [Paimon Spark runtime jar](https://paimon.apache.org/docs/0.8/spark/quick-start/#preparation) and add it to the Spark classpath.
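+For example, a minimal launch sketch (the jar versions, plugin class, and config keys below are illustrative and should be checked against your Gravitino and Paimon versions):
+
+```shell
+# Illustrative: put the Paimon Spark runtime jar and the Gravitino Spark
+# connector jar on the classpath, then enable the Gravitino plugin.
+# Jar names/versions and config values here are placeholders.
+./bin/spark-sql \
+  --jars paimon-spark-3.4-0.8.0.jar,gravitino-spark-connector-runtime-3.4_2.12-0.8.0.jar \
+  --conf spark.plugins=org.apache.gravitino.spark.connector.plugin.GravitinoSparkPlugin \
+  --conf spark.sql.gravitino.uri=http://localhost:8090 \
+  --conf spark.sql.gravitino.metalake=mymetalake
+```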
+
+## Capabilities
+
+### Paimon Catalog Backend Support
+- Currently only the Paimon FilesystemCatalog on HDFS is supported.
+
+### Supported DDL and DML operations
+#### Namespace Support
+- `CREATE NAMESPACE`
+- `DROP NAMESPACE`
+- `LIST NAMESPACE`
+- `LOAD NAMESPACE`
+  - It cannot return user-specified configs yet, because only the FilesystemCatalog is supported in the Spark connector.
+
+#### Namespace Not Supported
+- `ALTER NAMESPACE`
+  - Paimon does not support altering namespaces.
+
+#### Table DDL and DML Support
+- `CREATE TABLE`
+  - Doesn't support distribution and sort orders.
+- `DROP TABLE`
+- `ALTER TABLE`
+- `LIST TABLE`
+- `DESCRIBE TABLE`
+- `SELECT`
+- `INSERT INTO & OVERWRITE`
+- `Schema Evolution`
+- `PARTITION MANAGEMENT`, such as `LIST PARTITIONS`, `ALTER TABLE ... DROP 
PARTITION ...`
+
+#### Table Operations Not Supported
+- Row Level operations, such as `MERGE INTO`, `DELETE`, `UPDATE`, `TRUNCATE`
+- Metadata tables, such as 
`{paimon_catalog}.{paimon_database}.{paimon_table}$snapshots`
+- Other Paimon extension SQLs, such as `Tag`
+- Call Statements
+- View
+- Time Travel
+- Hive and JDBC catalog backends, and object storage for FilesystemCatalog
+
+## SQL example
+
+```sql
+-- Suppose paimon_catalog is the Paimon catalog name managed by Gravitino
+USE paimon_catalog;
+
+CREATE DATABASE IF NOT EXISTS mydatabase;
+USE mydatabase;
+
+CREATE TABLE IF NOT EXISTS employee (
+  id bigint,
+  name string,
+  department string,
+  hire_date timestamp
+) PARTITIONED BY (name);
+
+SHOW TABLES;
+DESC TABLE EXTENDED employee;
+
+INSERT INTO employee
+VALUES
+(1, 'Alice', 'Engineering', TIMESTAMP '2021-01-01 09:00:00'),
+(2, 'Bob', 'Marketing', TIMESTAMP '2021-02-01 10:30:00'),
+(3, 'Charlie', 'Sales', TIMESTAMP '2021-03-01 08:45:00');
+
+SELECT * FROM employee WHERE name = 'Alice';
+
+SHOW PARTITIONS employee;
+ALTER TABLE employee DROP PARTITION (`name`='Alice');
+```
+
+## Catalog properties
+
+The Gravitino Spark connector transforms the following property names, defined in catalog properties, into Spark Paimon connector configurations.
+
+| Gravitino catalog property name | Spark Paimon connector configuration | Description          | Since Version |
+|---------------------------------|--------------------------------------|----------------------|---------------|
+| `catalog-backend`               | `metastore`                          | Catalog backend type | 0.6.0         |

Review Comment:
   Should we update the version to 0.8?


