This is an automated email from the ASF dual-hosted git repository.

fanng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/gravitino.git


The following commit(s) were added to refs/heads/main by this push:
     new 5aa8918766 [#7895]improvement(flink-connector): update flink connector document to make it better (#7920)
5aa8918766 is described below

commit 5aa8918766e6be489e7550ef56ec63011d0221f5
Author: Shaofeng Shi <[email protected]>
AuthorDate: Wed Aug 6 12:59:02 2025 +0800

    [#7895]improvement(flink-connector): update flink connector document to make it better (#7920)
    
    ### What changes were proposed in this pull request?
    
    The Flink connector documentation contains errors that may confuse
    users.
    
    ### Why are the changes needed?
    
    1. Use the correct syntax, as listed in the Flink documentation:
    
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/use/#use-catalog
    2. Add instructions on where to place the JAR files.
    3. Add the batch-mode `SET` command to each markdown file, so that Flink
    won't report an error.
    
    Fix: #7895
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, only documentation.
    
    ### How was this patch tested?
    
    Tested manually with the latest 0.9.1 release and Flink 1.18.
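
    For reference, the two settings this patch adds to each SQL example are
    standard Flink SQL client options: batch mode avoids errors from
    unbounded streaming execution, and tableau mode prints results directly
    in the client.

    ```sql
    -- Run these in the Flink SQL client before the example queries.
    SET 'execution.runtime-mode' = 'batch';
    SET 'sql-client.execution.result-mode' = 'tableau';
    ```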
---
 docs/flink-connector/flink-catalog-hive.md | 16 +++++++++++-----
 docs/flink-connector/flink-catalog-jdbc.md |  7 +++++++
 docs/flink-connector/flink-connector.md    | 17 ++++++++++++-----
 3 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/docs/flink-connector/flink-catalog-hive.md b/docs/flink-connector/flink-catalog-hive.md
index 9fc9349e35..057cd4d0d0 100644
--- a/docs/flink-connector/flink-catalog-hive.md
+++ b/docs/flink-connector/flink-catalog-hive.md
@@ -39,18 +39,24 @@ USE CATALOG hive_a;
 CREATE DATABASE IF NOT EXISTS mydatabase;
 USE mydatabase;
 
+SET 'execution.runtime-mode' = 'batch';
+-- [INFO] Execute statement succeed.
+
+SET 'sql-client.execution.result-mode' = 'tableau';
+-- [INFO] Execute statement succeed.
+
 // Create table
 CREATE TABLE IF NOT EXISTS employees (
     id INT,
     name STRING,
-    date INT
+    dt INT
 )
-PARTITIONED BY (date);
+PARTITIONED BY (dt);
 
-DESC TABLE EXTENDED employees;
+DESC EXTENDED employees;
 
-INSERT INTO TABLE employees VALUES (1, 'John Doe', 20240101), (2, 'Jane Smith', 20240101);
-SELECT * FROM employees WHERE date = '20240101';
+INSERT INTO employees VALUES (1, 'John Doe', 20240101), (2, 'Jane Smith', 20240101);
+SELECT * FROM employees WHERE dt = 20240101;
 ```
 
 ## Catalog properties
diff --git a/docs/flink-connector/flink-catalog-jdbc.md b/docs/flink-connector/flink-catalog-jdbc.md
index 4414500f4e..48c59c4091 100644
--- a/docs/flink-connector/flink-catalog-jdbc.md
+++ b/docs/flink-connector/flink-catalog-jdbc.md
@@ -23,6 +23,13 @@ Place the following JAR files in the lib directory of your Flink installation:
 - [`gravitino-flink-connector-runtime-1.18_2.12-${gravitino-version}.jar`](https://mvnrepository.com/artifact/org.apache.gravitino/gravitino-flink-connector-runtime-1.18)
 - JDBC driver
 
+Next, when you create the JDBC catalog in Gravitino, add the `flink.bypass.default-database` property with the value of the default database name.
+
+
+```text
+flink.bypass.default-database=db  
+```
+
 ### SQL Example
 
 ```sql
diff --git a/docs/flink-connector/flink-connector.md b/docs/flink-connector/flink-connector.md
index 84067ecbeb..48c1db410a 100644
--- a/docs/flink-connector/flink-connector.md
+++ b/docs/flink-connector/flink-connector.md
@@ -27,6 +27,7 @@ This capability allows users to perform federation queries, accessing data from
 ## How to use it
 
 1. [Build](../how-to-build.md) or [download](https://mvnrepository.com/artifact/org.apache.gravitino/gravitino-flink-connector-runtime-1.18) the Gravitino flink connector runtime jar, and place it to the classpath of Flink.
+
 2. Configure the Flink configuration to use the Gravitino flink connector.
 
 | Property                                         | Type   | Default Value     | Description                                                          | Required | Since Version    |
@@ -38,28 +39,34 @@ This capability allows users to perform federation queries, accessing data from
 Set the flink configuration in flink-conf.yaml.
 ```yaml
 table.catalog-store.kind: gravitino
-table.catalog-store.gravitino.gravitino.metalake: test
+table.catalog-store.gravitino.gravitino.metalake: metalake_demo
 table.catalog-store.gravitino.gravitino.uri: http://localhost:8090
 ```
 Or you can set the flink configuration in the `TableEnvironment`.
 ```java
 final Configuration configuration = new Configuration();
 configuration.setString("table.catalog-store.kind", "gravitino");
-configuration.setString("table.catalog-store.gravitino.gravitino.metalake", "test");
+configuration.setString("table.catalog-store.gravitino.gravitino.metalake", "metalake_demo");
 configuration.setString("table.catalog-store.gravitino.gravitino.uri", "http://localhost:8090");
 EnvironmentSettings.Builder builder = EnvironmentSettings.newInstance().withConfiguration(configuration);
 TableEnvironment tableEnv = TableEnvironment.create(builder.inBatchMode().build());
 ```
 
-3. Execute the Flink SQL query.
+3. Add necessary jar files to Flink's classpath.
+
+To run Flink with Gravitino connector and then access the data source like Hive, you may need to put additional jars to Flink's classpath. You can refer to the [Flink document](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/hive/overview/#dependencies) for more information.
+
+4. Execute the Flink SQL query.
 
-Suppose there is only one hive catalog with the name `hive` in the metalake `test`.
+Suppose there is only one hive catalog with the name `catalog_hive` in the metalake `metalake_demo`.
 
 ```sql
 // use hive catalog
-USE hive;
+USE CATALOG catalog_hive;
 CREATE DATABASE db;
 USE db;
+SET 'execution.runtime-mode' = 'batch';
+SET 'sql-client.execution.result-mode' = 'tableau';
 CREATE TABLE hive_students (id INT, name STRING);
 INSERT INTO hive_students VALUES (1, 'Alice'), (2, 'Bob');
 SELECT * FROM hive_students;
