jerryshao commented on code in PR #10539:
URL: https://github.com/apache/gravitino/pull/10539#discussion_r3008023051


##########
design/aws-glue-catalog-connector.md:
##########
@@ -0,0 +1,592 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Design: AWS Glue Data Catalog Support for Apache Gravitino
+
+## 1. Problem Statement and Goals
+
+### 1.1 Problem
+
+**Gravitino currently cannot federate AWS Glue Data Catalog.** This is a significant gap because:
+
+1. **Large user base on AWS**: The majority of cloud-native data lakes run on AWS with Glue Data Catalog as the central metadata service (the default for Athena, Redshift Spectrum, EMR, and Lake Formation). These organizations cannot bring their Glue metadata into Gravitino's unified management layer.
+2. **No native integration path**: The only workaround is pointing Gravitino's Hive catalog at Glue's HMS-compatible Thrift endpoint (`metastore.uris = thrift://...`), which is undocumented, region-limited, and cannot leverage Glue-native features (catalog ID, cross-account access, VPC endpoints).
+3. **Competitive landscape**: Trino, Spark, and other engines all have first-class Glue support with dedicated configuration. Users expect the same from Gravitino.
+
+### 1.2 Goals
+
+After this feature is implemented:
+
+1. **Register AWS Glue Data Catalog in Gravitino**:
+   ```bash
+   # Hive-format tables
+   gcli catalog create --name hive_on_glue --provider hive \
+     --properties metastore-type=glue,s3-region=us-east-1
+
+   # Iceberg-format tables
+   gcli catalog create --name iceberg_on_glue --provider lakehouse-iceberg \
+     --properties catalog-backend=glue,warehouse=s3://bucket/iceberg,s3-region=us-east-1
+   ```
+
+2. **Standard Gravitino API works against Glue catalogs**:
+   ```bash
+   gcli schema list --catalog hive_on_glue
+   gcli table list --catalog hive_on_glue --schema my_database
+   gcli table details --catalog iceberg_on_glue --schema analytics --table events
+   ```
+
+3. **Trino and Spark connect transparently** — Trino uses `hive.metastore=glue` / `iceberg.catalog.type=glue`; Spark uses `AWSGlueDataCatalogHiveClientFactory` / `GlueCatalog`. Users query Glue tables through Gravitino without knowing the underlying mechanism.
+
+4. **AWS-native authentication** (reusing existing S3 properties): static credentials, STS AssumeRole, or the default credential chain (environment variables, instance profile).
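
As a sketch only: assuming the connector reuses the `s3-*` property names from Gravitino's existing S3 support (the exact keys below are illustrative, not final), the three authentication modes of item 4 might look like this in catalog properties:

```properties
# Option 1: static credentials (hypothetical keys mirroring existing S3 properties)
s3-access-key-id=AKIA...
s3-secret-access-key=...

# Option 2: STS AssumeRole (hypothetical key; 123456789012 is a placeholder account)
s3-role-arn=arn:aws:iam::123456789012:role/GlueAccessRole

# Option 3: set no credential properties at all and fall back to the AWS default
# credential chain (environment variables, instance profile, etc.)
```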
+
+## 2. Background
+
+### 2.1 AWS Glue Data Catalog
+
+AWS Glue Data Catalog is a managed metadata repository storing:
+- **Databases** — logical groupings, equivalent to Gravitino schemas.
+- **Tables** — metadata records containing column definitions, storage descriptors, partition keys, and user-defined parameters.
+
+Tables come in two formats:
+
+| Format | How Glue Stores It |
+|---|---|
+| **Hive** | Full metadata in `StorageDescriptor` (columns, SerDe, InputFormat, OutputFormat, location). The majority of tables in most Glue catalogs (legacy ETL, Athena CTAS, Redshift Spectrum). |
+| **Iceberg** | `Parameters["table_type"] = "ICEBERG"` and `Parameters["metadata_location"]` pointing to the Iceberg metadata JSON on S3. `StorageDescriptor.Columns` is typically empty. Growing rapidly. |
+
+A complete Glue integration must handle both table formats.
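
A minimal Python sketch (illustrative, not part of the proposed implementation) of how a connector might distinguish the two formats from a Glue `Table` record, using exactly the fields described in the table above:

```python
def classify_glue_table(table: dict) -> str:
    """Classify a Glue Table record as 'iceberg' or 'hive'.

    Follows the convention described above: Iceberg tables carry
    Parameters["table_type"] = "ICEBERG" (plus a metadata_location);
    everything else is treated as a Hive-format table.
    """
    params = table.get("Parameters") or {}
    if params.get("table_type", "").upper() == "ICEBERG":
        return "iceberg"
    return "hive"


# Abridged shapes mimicking glue:GetTable responses.
iceberg_tbl = {
    "Name": "events",
    "Parameters": {
        "table_type": "ICEBERG",
        "metadata_location": "s3://bucket/iceberg/events/metadata/00001.metadata.json",
    },
    "StorageDescriptor": {"Columns": []},  # typically empty for Iceberg
}
hive_tbl = {
    "Name": "clicks",
    "Parameters": {},
    "StorageDescriptor": {
        "Columns": [{"Name": "id", "Type": "bigint"}],
        "Location": "s3://bucket/hive/clicks/",
    },
}

print(classify_glue_table(iceberg_tbl))  # -> iceberg
print(classify_glue_table(hive_tbl))     # -> hive
```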
+
+### 2.2 How Query Engines Use Glue
+
+Trino and Spark both have native Glue support — they call the AWS Glue SDK directly, not via HMS Thrift:
+
+| Engine | Hive Tables on Glue | Iceberg Tables on Glue |
+|---|---|---|
+| **Trino** | Hive connector with `hive.metastore=glue` | Iceberg connector with `iceberg.catalog.type=glue` |
+| **Spark** | Hive catalog with `AWSGlueDataCatalogHiveClientFactory` | Iceberg catalog with `catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog` |
+
+Both engines use a **one-catalog-to-one-connector** model — a single catalog handles either Hive-format or Iceberg-format tables, not both. This is consistent with Gravitino's existing catalog model.
+
+### 2.3 Gravitino's Current Architecture
+
+Gravitino's catalog plugin system provides:
+- **Hive catalog** (`provider=hive`): Connects to HMS via Thrift. Client chain: `HiveCatalogOperations` → `CachedClientPool` → `HiveClientImpl` → `HiveShimV2/V3` → `IMetaStoreClient`.
+- **Iceberg catalog** (`provider=lakehouse-iceberg`): Supports pluggable backends (`catalog-backend=hive|jdbc|rest|memory|custom`). Each backend maps to a different Iceberg `Catalog` implementation.
+- **Trino/Spark connectors**: Property converters translate Gravitino catalog properties into engine-specific properties.
+
+## 3. Design Alternatives
+
+### Alternative A: New `catalog-glue` Module

Review Comment:
   Hi @markhoerth, so you are more inclined to solution A, right?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to