Hi all,

I wanted to bring up a suggestion regarding our current documentation. The
existing examples for Iceberg often use the Hadoop catalog, as seen in:

   - Adding a Catalog - Spark Quickstart [1]
   - Adding Catalogs - Spark Getting Started [2]

Since we generally advise against using the Hadoop catalog in production
environments, I believe it would be beneficial to replace these examples
with ones that use the JDBC catalog. The JDBC catalog, configured with a
local SQLite database file, offers similar convenience but aligns better
with production best practices.
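
For reference, a rough sketch of the kind of configuration I have in mind
is below (PySpark only for illustration; the catalog name "local", the /tmp
paths, and the artifact versions are placeholders, not necessarily what the
PR ends up using):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("iceberg-jdbc-catalog-sketch")
        # Iceberg Spark runtime plus a SQLite JDBC driver on the classpath.
        .config(
            "spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,"
            "org.xerial:sqlite-jdbc:3.46.0.0",
        )
        .config(
            "spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
        )
        .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
        # JDBC catalog backed by a local SQLite database file.
        .config("spark.sql.catalog.local.type", "jdbc")
        .config("spark.sql.catalog.local.uri", "jdbc:sqlite:/tmp/iceberg_catalog.db")
        .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg_warehouse")
        .getOrCreate()
    )

    # Smoke test: create a namespace and a table tracked by the SQLite-backed catalog.
    spark.sql("CREATE NAMESPACE IF NOT EXISTS local.db")
    spark.sql(
        "CREATE TABLE IF NOT EXISTS local.db.demo (id BIGINT, data STRING) USING iceberg"
    )

The point is that this stays a single-node, zero-infrastructure setup for
readers following the quickstart, while demonstrating a catalog type that
is actually recommended for production.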

I've created an issue [3] and a PR [4] to address this. Please take a look,
and I'd love to hear your thoughts on whether this is a direction we want
to pursue.

Best,
Kevin Liu

[1] https://iceberg.apache.org/spark-quickstart/#adding-a-catalog
[2] https://iceberg.apache.org/docs/nightly/spark-getting-started/#adding-catalogs
[3] https://github.com/apache/iceberg/issues/11284
[4] https://github.com/apache/iceberg/pull/11285
