FANNG1 commented on code in PR #5914:
URL: https://github.com/apache/gravitino/pull/5914#discussion_r1910097162


##########
flink-connector/flink/src/main/java/org/apache/gravitino/flink/connector/iceberg/IcebergPropertiesConverter.java:
##########
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.flink.connector.iceberg;
+
+import com.google.common.base.Preconditions;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.gravitino.catalog.lakehouse.iceberg.IcebergConstants;
+import org.apache.gravitino.catalog.lakehouse.iceberg.IcebergPropertiesUtils;
+import org.apache.gravitino.flink.connector.PropertiesConverter;
+
+public class IcebergPropertiesConverter implements PropertiesConverter {
+  public static IcebergPropertiesConverter INSTANCE = new IcebergPropertiesConverter();
+
+  private IcebergPropertiesConverter() {}
+
+  private static final Map<String, String> GRAVITINO_CONFIG_TO_FLINK_ICEBERG;

Review Comment:
   Why not initialize the map directly?
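   For reference, the static block could collapse into a direct initializer. A minimal sketch (the literal keys stand in for `IcebergConstants.CATALOG_BACKEND` and `IcebergPropertiesConstants.ICEBERG_CATALOG_TYPE`, which aren't reproduced here):

   ```java
   import java.util.Map;

   public class IcebergPropertyMappings {
       // Direct, immutable initialization; no static block or HashMap needed.
       static final Map<String, String> GRAVITINO_CONFIG_TO_FLINK_ICEBERG =
           Map.of("catalog-backend", "catalog-type");

       public static void main(String[] args) {
           System.out.println(GRAVITINO_CONFIG_TO_FLINK_ICEBERG.get("catalog-backend"));
       }
   }
   ```

   Since the class already depends on Guava, `ImmutableMap.of(...)` would work equally well and matches the project's existing style.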



##########
docs/flink-connector/flink-catalog-iceberg.md:
##########
@@ -0,0 +1,76 @@
+---
+title: "Flink connector Iceberg catalog"
+slug: /flink-connector/flink-catalog-iceberg
+keyword: flink connector iceberg catalog
+license: "This software is licensed under the Apache License version 2."
+---
+
+The Apache Gravitino Flink connector can be used to read and write Iceberg tables, with the metadata managed by the Gravitino server.
+To enable the Flink connector, you must download the Iceberg Flink runtime JAR and place it in the Flink classpath.
+
+## Capabilities
+
+#### Supported DML and DDL operations:
+
+- `CREATE CATALOG`
+- `CREATE DATABASE`
+- `CREATE TABLE`
+- `DROP TABLE`
+- `ALTER TABLE`
+- `INSERT INTO & OVERWRITE`
+- `SELECT`
+
+#### Operations not supported:
+
+- Partition operations
+- View operations
+- Metadata tables, like:
+  - `{iceberg_catalog}.{iceberg_database}.{iceberg_table}&snapshots`
+- Querying UDF
+  - `UPDATE` clause

Review Comment:
   Shouldn't the `UPDATE` & `DELETE` clauses be listed separately, rather than nested under `Querying UDF`?
   



##########
flink-connector/flink/src/main/java/org/apache/gravitino/flink/connector/iceberg/IcebergPropertiesConverter.java:
##########
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.flink.connector.iceberg;
+
+import com.google.common.base.Preconditions;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.gravitino.catalog.lakehouse.iceberg.IcebergConstants;
+import org.apache.gravitino.catalog.lakehouse.iceberg.IcebergPropertiesUtils;
+import org.apache.gravitino.flink.connector.PropertiesConverter;
+
+public class IcebergPropertiesConverter implements PropertiesConverter {
+  public static IcebergPropertiesConverter INSTANCE = new IcebergPropertiesConverter();
+
+  private IcebergPropertiesConverter() {}
+
+  private static final Map<String, String> GRAVITINO_CONFIG_TO_FLINK_ICEBERG;
+
+  static {
+    Map<String, String> map = new HashMap();
+    map.put(IcebergConstants.CATALOG_BACKEND, IcebergPropertiesConstants.ICEBERG_CATALOG_TYPE);
+    GRAVITINO_CONFIG_TO_FLINK_ICEBERG = Collections.unmodifiableMap(map);
+  }
+
+  @Override
+  public Map<String, String> toFlinkCatalogProperties(Map<String, String> gravitinoProperties) {
+    Preconditions.checkArgument(
+        gravitinoProperties != null, "Iceberg Catalog properties should not be null.");
+
+    Map<String, String> all = new HashMap<>();
+    if (gravitinoProperties != null) {
+      gravitinoProperties.forEach(
+          (k, v) -> {
+            if (k.startsWith(FLINK_PROPERTY_PREFIX)) {
+              String newKey = k.substring(FLINK_PROPERTY_PREFIX.length());
+              all.put(newKey, v);
+            }
+          });
+    }
+    Map<String, String> transformedProperties =
+        IcebergPropertiesUtils.toIcebergCatalogProperties(gravitinoProperties);
+
+    if (transformedProperties != null) {
+      all.putAll(transformedProperties);
+    }
+    all.put(
+        CommonCatalogOptions.CATALOG_TYPE.key(), GravitinoIcebergCatalogFactoryOptions.IDENTIFIER);
+    // Map "catalog-backend" to "catalog-type".
+    GRAVITINO_CONFIG_TO_FLINK_ICEBERG.forEach(

Review Comment:
   Gravitino supports custom catalog backends; should we add special handling for them here?
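   One possible shape for that handling, as a self-contained sketch: a custom backend would name its implementation class rather than a built-in type, so the converter could forward a `catalog-impl` property instead of setting `catalog-type`. The `"custom"` backend value and both key names are assumptions here, not the project's real constants.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class BackendTranslation {
       // Translate the Gravitino "catalog-backend" property into Flink Iceberg options.
       static Map<String, String> translate(Map<String, String> gravitinoProps) {
           Map<String, String> flinkProps = new HashMap<>();
           String backend = gravitinoProps.get("catalog-backend");
           if ("custom".equalsIgnoreCase(backend)) {
               // A custom backend names its implementation class directly,
               // so forward it instead of a built-in catalog type.
               flinkProps.put("catalog-impl", gravitinoProps.get("catalog-impl"));
           } else if (backend != null) {
               flinkProps.put("catalog-type", backend);
           }
           return flinkProps;
       }

       public static void main(String[] args) {
           System.out.println(translate(Map.of("catalog-backend", "hive")));
       }
   }
   ```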



##########
flink-connector/flink/src/test/java/org/apache/gravitino/flink/connector/integration/test/iceberg/FlinkIcebergHiveCatalogIT.java:
##########
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.flink.connector.integration.test.iceberg;
+
+import com.google.common.collect.Maps;
+import java.util.Map;
+import org.apache.gravitino.flink.connector.iceberg.IcebergPropertiesConstants;
+import org.junit.jupiter.api.Tag;
+
+@Tag("gravitino-docker-test")
+public class FlinkIcebergHiveCatalogIT extends FlinkIcebergCatalogIT {

Review Comment:
   Does it support other catalog backends like JDBC or REST?



##########
docs/flink-connector/flink-catalog-iceberg.md:
##########
@@ -0,0 +1,76 @@
+---
+title: "Flink connector Iceberg catalog"
+slug: /flink-connector/flink-catalog-iceberg
+keyword: flink connector iceberg catalog
+license: "This software is licensed under the Apache License version 2."
+---
+
+The Apache Gravitino Flink connector can be used to read and write Iceberg tables, with the metadata managed by the Gravitino server.
+To enable the Flink connector, you must download the Iceberg Flink runtime JAR and place it in the Flink classpath.
+
+## Capabilities
+
+#### Supported DML and DDL operations:
+
+- `CREATE CATALOG`
+- `CREATE DATABASE`
+- `CREATE TABLE`
+- `DROP TABLE`
+- `ALTER TABLE`
+- `INSERT INTO & OVERWRITE`
+- `SELECT`
+
+#### Operations not supported:
+
+- Partition operations
+- View operations
+- Metadata tables, like:
+  - `{iceberg_catalog}.{iceberg_database}.{iceberg_table}&snapshots`
+- Querying UDF
+  - `UPDATE` clause
+  - `DELETE` clause
+  - `CREATE TABLE LIKE` clause
+
+## SQL example
+```sql
+
+-- Suppose iceberg_a is the Iceberg catalog name managed by Gravitino
+
+USE iceberg_a;
+
+CREATE DATABASE IF NOT EXISTS mydatabase;
+USE mydatabase;
+
+CREATE TABLE sample (
+    id BIGINT COMMENT 'unique id',
+    data STRING NOT NULL
+) PARTITIONED BY (data) 
+WITH ('format-version'='2');
+
+INSERT INTO sample
+VALUES (1, 'A'), (2, 'B');
+
+SELECT * FROM sample WHERE data = 'B';
+
+```
+
+## Catalog properties
+
+The Gravitino Flink connector transforms the following properties in a catalog to Flink connector configuration.
+
+| Gravitino catalog property name | Flink Iceberg connector configuration | Description                                    | Since Version    |
+|---------------------------------|---------------------------------------|------------------------------------------------|------------------|
+| `catalog-backend`               | `catalog-type`                        | Catalog backend type                           | 0.8.0-incubating |
+| `uri`                           | `uri`                                 | Catalog backend URI                            | 0.8.0-incubating |
+| `warehouse`                     | `warehouse`                           | Catalog backend warehouse                      | 0.8.0-incubating |
+| `io-impl`                       | `io-impl`                             | The IO implementation for `FileIO` in Iceberg. | 0.8.0-incubating |
+| `oss-endpoint`                  | `oss.endpoint`                        | The endpoint of Aliyun OSS service.            | 0.8.0-incubating |

Review Comment:
   In 0.8, Gravitino relaxed the constraint so that the secret key can be passed to the client; could you add the OSS AK/SK (access key ID and secret access key) mappings too?
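   Concretely, that would mean two more entries in the mapping table, along the lines of the sketch below. Both the Gravitino-side names (`oss-access-key-id`, `oss-secret-access-key`) and the Iceberg Aliyun-module names (`client.access-key-id`, `client.access-key-secret`) are assumptions to be verified against both projects' constants.

   ```java
   import java.util.Map;

   public class OssCredentialMappings {
       // Hypothetical extra mappings for the OSS access key and secret key;
       // key names on both sides are assumptions, not confirmed constants.
       static final Map<String, String> OSS_CREDENTIAL_MAPPINGS = Map.of(
           "oss-access-key-id", "client.access-key-id",
           "oss-secret-access-key", "client.access-key-secret");

       public static void main(String[] args) {
           OSS_CREDENTIAL_MAPPINGS.forEach((k, v) -> System.out.println(k + " -> " + v));
       }
   }
   ```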


