Hello spark-devs,
We hit a case similar to SPARK-28098 when we tried to read a Parquet-format table generated by a Hive UNION operation, and I made a quick fix for it.

I'm not sure whether we should reuse the same configuration as Hive or add a new one; a rough sketch of the scenario and the Hive-side settings involved is below.
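For context, here is a minimal sketch of the kind of setup we're hitting. The table name and paths are made up for illustration, and the two Hadoop/Hive properties shown are the existing Hive-side settings for recursive directory listing, not whatever new configuration the patch may introduce.

// Illustrative only: Hive's UNION writes each branch of the query into a
// subdirectory of the table location, e.g.
//   /warehouse/union_tbl/HIVE_UNION_SUBDIR_1/part-...
//   /warehouse/union_tbl/HIVE_UNION_SUBDIR_2/part-...
// (created in Hive with something like:
//   CREATE TABLE union_tbl STORED AS PARQUET AS
//   SELECT 1 AS id UNION ALL SELECT 2 AS id)

import org.apache.spark.sql.SparkSession

object HiveUnionParquetRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-union-parquet-read")
      .enableHiveSupport()
      // Existing Hive-side settings that let the Hive serde read path
      // descend into the union subdirectories; passed through as Hadoop conf.
      .config("spark.hadoop.hive.mapred.supports.subdirectories", "true")
      .config("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", "true")
      .getOrCreate()

    // With Spark's native Parquet reader the subdirectories are not listed
    // recursively, so the table can come back empty -- the behavior this
    // message is about.
    spark.table("union_tbl").show()
  }
}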

This is also my first time contributing code to Spark, and it looks like I need a committer to authorize the test run.
Any feedback is appreciated!
