This is an automated email from the ASF dual-hosted git repository.

roryqi pushed a commit to branch ISSUE-6353
in repository https://gitbox.apache.org/repos/asf/gravitino.git

commit 4a864c69c118e4f0b93209320f620125b1d83c43
Author: Qi Yu <y...@datastrato.com>
AuthorDate: Wed Jan 15 22:21:08 2025 +0800

    [MINOR] fix(docs): Fix several document errors (#6251)
    
    ### What changes were proposed in this pull request?
    
    Fix several errors in the documentation for hadoop-catalog and hive-catalog
    
    ### Why are the changes needed?
    
    Improving the user experience.
    
    ### Does this PR introduce _any_ user-facing change?
    
    N/A.
    
    ### How was this patch tested?
    
    N/A.
---
 docs/hadoop-catalog-with-gcs.md         |  2 +-
 docs/hadoop-catalog-with-oss.md         |  5 ++---
 docs/hive-catalog-with-cloud-storage.md | 11 ++++++++---
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/docs/hadoop-catalog-with-gcs.md b/docs/hadoop-catalog-with-gcs.md
index 5422047efd..29465c2549 100644
--- a/docs/hadoop-catalog-with-gcs.md
+++ b/docs/hadoop-catalog-with-gcs.md
@@ -47,7 +47,7 @@ Refer to [Fileset configurations](./hadoop-catalog.md#fileset-properties) for mo
 
 This section will show you how to use the Hadoop catalog with GCS in Gravitino, including detailed examples.
 
-### Create a Hadoop catalog with GCS
+### Step 1: Create a Hadoop catalog with GCS
 
 First, you need to create a Hadoop catalog with GCS. The following example shows how to create a Hadoop catalog with GCS:
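
For illustration (not part of the diff): a minimal sketch of this step with the Gravitino Python client. The endpoint, metalake, bucket, and property names (`location`, `gcs-service-account-file`, `filesystem-providers`) are assumptions based on the surrounding docs, not taken from this commit.

```python
# Sketch: create a Hadoop (fileset) catalog backed by GCS.
# Assumes a running Gravitino server and an existing metalake.
from gravitino import GravitinoClient, Catalog

client = GravitinoClient(uri="http://localhost:8090", metalake_name="metalake")

# Arguments passed positionally as (name, type, provider, comment, properties),
# since keyword names have varied across client versions.
catalog = client.create_catalog(
    "test_catalog",
    Catalog.Type.FILESET,
    "hadoop",
    "Hadoop catalog backed by GCS",
    {
        "location": "gs://example-bucket/catalog-root",               # placeholder bucket
        "gcs-service-account-file": "/path/to/service-account.json",  # assumed property name
        "filesystem-providers": "gcs",
    },
)
```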
 
diff --git a/docs/hadoop-catalog-with-oss.md b/docs/hadoop-catalog-with-oss.md
index b9ef5f44e2..f330f7ede9 100644
--- a/docs/hadoop-catalog-with-oss.md
+++ b/docs/hadoop-catalog-with-oss.md
@@ -123,7 +123,7 @@ oss_catalog = gravitino_client.create_catalog(name="test_catalog",
 </TabItem>
 </Tabs>
 
-Step 2: Create a Schema
+### Step 2: Create a Schema
 
 Once the Hadoop catalog with OSS is created, you can create a schema inside that catalog. Below are examples of how to do this:
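
As an aside (not part of the diff), a minimal Python sketch of this step, reusing the `client` from the earlier sketch; the names are placeholders, and `create_schema(name=...)` follows the call visible in the hunk below:

```python
# Sketch: create a schema inside the OSS-backed catalog.
catalog = client.load_catalog("test_catalog")
catalog.as_schemas().create_schema(
    name="test_schema",
    comment="schema for OSS-backed filesets",
    properties={},
)
```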
 
@@ -174,11 +174,10 @@ catalog.as_schemas().create_schema(name="test_schema",
 </Tabs>
 
 
-### Create a fileset
+### Step 3: Create a fileset
 
 Now that the schema is created, you can create a fileset inside it. Here’s how:
 
-
 <Tabs groupId="language" queryString>
 <TabItem value="shell" label="Shell">
 
diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index 49a018907b..b1403ba5e1 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -1,8 +1,8 @@
 ---
-title: "Hive catalog with s3 and adls"
+title: "Hive catalog with S3, ADLS and GCS"
 slug: /hive-catalog
 date: 2024-9-24
-keyword: Hive catalog cloud storage S3 ADLS
+keyword: Hive catalog cloud storage S3 ADLS GCS
 license: "This software is licensed under the Apache License version 2."
 ---
 
@@ -84,8 +84,13 @@ cp ${HADOOP_HOME}/share/hadoop/tools/lib/*aws* ${HIVE_HOME}/lib
 
 # For Azure Blob Storage(ADLS)
 cp ${HADOOP_HOME}/share/hadoop/tools/lib/*azure* ${HIVE_HOME}/lib
+
+# For Google Cloud Storage(GCS)
+cp gcs-connector-hadoop3-2.2.22-shaded.jar ${HIVE_HOME}/lib
 ```
 
+[`gcs-connector-hadoop3-2.2.22-shaded.jar`](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases/download/v2.2.22/gcs-connector-hadoop3-2.2.22-shaded.jar) is the bundled jar that contains the Hadoop GCS connector; choose the GCS connector jar that matches the version of Hadoop you are using.
+
 Alternatively, you can download the required JARs from the Maven repository and place them in the Hive classpath. It is crucial to verify that the JARs are compatible with the version of Hadoop you are using to avoid any compatibility issue.
 
 ### Restart Hive metastore
@@ -265,7 +270,7 @@ To access S3-stored tables using Spark, you need to configure the SparkSession a
     sparkSession.sql("...");
 ```
 
-:::Note
+:::note
 Please download the [Hadoop AWS jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws) and the [AWS Java SDK bundle jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the Spark classpath. If these JARs are missing, Spark will not be able to access S3 storage.
 Azure Blob Storage (ADLS) requires the [Hadoop Azure jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure) and the [Azure cloud SDK jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob) to be placed in the Spark classpath.
 For Google Cloud Storage (GCS), download the [Hadoop GCS jar](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and place it in the Spark classpath.
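
For completeness (not part of the diff), a minimal PySpark sketch of the S3 case described above; the bucket, table, and credentials are placeholders, and the `fs.s3a.*` keys come from the standard hadoop-aws module:

```python
# Sketch: a SparkSession configured for S3A access, assuming the hadoop-aws
# and aws-java-sdk-bundle jars are already on Spark's classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-s3-tables")
    .config("spark.hadoop.fs.s3a.access.key", "<access-key>")   # placeholder
    .config("spark.hadoop.fs.s3a.secret.key", "<secret-key>")   # placeholder
    .config("spark.hadoop.fs.s3a.endpoint", "s3.amazonaws.com")
    .enableHiveSupport()
    .getOrCreate()
)

# Hive tables whose LOCATION points at s3a://... can now be queried.
spark.sql("SELECT * FROM example_db.example_table LIMIT 10").show()
```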
