yihua commented on code in PR #12772:
URL: https://github.com/apache/hudi/pull/12772#discussion_r2074397130


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlCommonUtils.scala:
##########
@@ -327,7 +328,7 @@ object HoodieSqlCommonUtils extends SparkAdapterSupport {
       val duplicateColumns = lowerPartColNames.groupBy(identity).collect {
         case (x, ys) if ys.length > 1 => s"`$x`"
       }
-      throw new AnalysisException(
+      throw new HoodieAnalysisException(

Review Comment:
   Same comment applies to `HoodieAnalysisException` here and in the other classes.
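
   For readers following along, a minimal sketch of the pattern being discussed, assuming `HoodieAnalysisException` is a Hudi-owned drop-in for Spark's `AnalysisException`; the constructor shape below is an assumption for illustration, not the PR's actual definition:

   ```scala
   // Hypothetical sketch only -- the actual class added in this PR may differ.
   // Assumes a Hudi-side analysis exception carrying the same message/cause
   // contract as Spark's AnalysisException, so throw sites can switch over
   // with a one-line change.
   class HoodieAnalysisException(
       message: String,
       cause: Option[Throwable] = None)
     extends RuntimeException(message, cause.orNull)

   // The throw-site change then follows the diff above:
   //   throw new AnalysisException(msg)        // before
   //   throw new HoodieAnalysisException(msg)  // after
   ```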



##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/DataSkippingUtils.scala:
##########
@@ -405,9 +407,9 @@ object DataSkippingUtils extends Logging {
       case Alias(c, _) => getTargetColNameParts(c)
       case GetStructField(c, _, Some(name)) => getTargetColNameParts(c) :+ name
       case ex: ExtractValue =>
-        throw new AnalysisException(s"convert reference to name failed, Updating nested fields is only supported for StructType: ${ex}.")
+        throw new HoodieAnalysisException(s"convert reference to name failed, Updating nested fields is only supported for StructType: ${ex}.")

Review Comment:
   Same here on `HoodieAnalysisException`
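
   For context, a hedged reconstruction of the method this hunk touches, based only on the cases visible above; the signature, the `Attribute` base case, and the catch-all are assumptions:

   ```scala
   import org.apache.spark.sql.catalyst.expressions.{
     Alias, Attribute, Expression, ExtractValue, GetStructField}

   // Hedged reconstruction, not the exact Hudi source: walks a resolved
   // expression back to the dotted column-name parts it references.
   def getTargetColNameParts(resolved: Expression): Seq[String] =
     resolved match {
       // Assumed base case: a plain attribute is a single name part.
       case attr: Attribute => Seq(attr.name)
       case Alias(c, _) => getTargetColNameParts(c)
       case GetStructField(c, _, Some(name)) => getTargetColNameParts(c) :+ name
       case ex: ExtractValue =>
         // Any other extraction (array/map access) is unsupported for updates.
         throw new HoodieAnalysisException(s"convert reference to name failed, Updating nested fields is only supported for StructType: ${ex}.")
       // Assumed catch-all so the match is exhaustive.
       case other =>
         throw new HoodieAnalysisException(s"Unsupported expression: ${other}")
     }
   ```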



##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala:
##########
@@ -152,7 +153,7 @@ class HoodieCatalogTable(val spark: SparkSession, var table: CatalogTable) exten
     } else if (table.schema.nonEmpty) {
       addMetaFields(table.schema)
     } else {
-      throw new AnalysisException(
+      throw new HoodieAnalysisException(

Review Comment:
   If it's not necessary here, let's revert the change that adds `HoodieAnalysisException`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
