Hi,

SparkTableUtil can be helpful for migrating existing Spark tables into Iceberg.
Right now, SparkTableUtil assumes that the partition information is always
tracked in the Hive metastore.

What about extending SparkTableUtil to handle Spark tables that don’t rely on
the Hive metastore? I have a local prototype that uses Spark’s
InMemoryFileIndex to infer the partitioning information from the directory
layout instead.
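For illustration, a minimal sketch of the idea, assuming a Spark 2.x-style
constructor for InMemoryFileIndex (it is an internal class in
org.apache.spark.sql.execution.datasources, and its signature has shifted
between Spark versions, so this may need adjusting):

```scala
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.InMemoryFileIndex

val spark = SparkSession.builder().appName("partition-discovery").getOrCreate()

// Root path of a table whose partitions are not tracked in the Hive metastore
// (the path here is just an example)
val rootPath = new Path("hdfs://namenode/warehouse/events")

// InMemoryFileIndex lists files under the root path and infers partition
// columns and values from the directory layout (e.g. date=2019-01-01/)
val fileIndex = new InMemoryFileIndex(
  spark, Seq(rootPath), Map.empty[String, String], None)

// partitionSpec() exposes the inferred partition schema and the
// discovered partition directories
val spec = fileIndex.partitionSpec()
spec.partitions.foreach { p =>
  println(s"values: ${p.values}, location: ${p.path}")
}
```

The discovered partition values and locations could then be fed into the
existing SparkTableUtil import path instead of a metastore listing.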

Thanks,
Anton