yihua commented on code in PR #12772:
URL: https://github.com/apache/hudi/pull/12772#discussion_r2076202199


##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/TruncateTableProcedure.scala:
##########
@@ -69,12 +70,12 @@ class TruncateTableProcedure extends BaseProcedure
     val tableId = table.identifier.quotedString
 
     if (table.tableType == CatalogTableType.VIEW) {
-      throw new AnalysisException(
+      throw new HoodieAnalysisException(

Review Comment:
   Similarly, revert all these changes to make the reviewer's life easier.



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/parser/HoodieCommonSqlParser.scala:
##########
@@ -109,7 +111,12 @@ class HoodieCommonSqlParser(session: SparkSession, delegate: ParserInterface)
         throw e.withCommand(command)
       case e: AnalysisException =>
         val position = Origin(e.line, e.startPosition)
-        throw new ParseException(Option(command), e.message, position, position)
+        throw sparkAdapter.newParseException(
+          Option(command),
+          e,
+          position,
+          position

Review Comment:
   nit: put these on the same line
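   For reference, the single-line form the nit asks for would look like this (a sketch reusing the names from the diff above, not a standalone snippet):
   ```scala
   throw sparkAdapter.newParseException(Option(command), e, position, position)
   ```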



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/MergeIntoHoodieTableCommand.scala:
##########
@@ -148,7 +149,7 @@ case class MergeIntoHoodieTableCommand(mergeInto: MergeIntoTable,
    *
   * To be able to leverage Hudi's engine to merge an incoming dataset against the existing table
   * we will have to make sure that both [[source]] and [[target]] tables have the *same*
-   * "primary-key" and "precombine" columns. Since actual MIT condition might be leveraging an arbitrary
+   * "primary-key" and "pre-combine" columns. Since actual MIT condition might be leveraging an arbitrary

Review Comment:
   Avoid unnecessary changes in this class



##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/client/functional/TestHoodieClientOnCopyOnWriteStorage.java:
##########


Review Comment:
   Let's make sure the functional tests are still run.



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/model/HoodieInternalRow.java:
##########
@@ -46,7 +46,7 @@
  *   allow in-place updates due to its memory layout)</li>
  * </ul>
  */
-public class HoodieInternalRow extends InternalRow {
+public abstract class HoodieInternalRow extends InternalRow {

Review Comment:
   Actually, in this PR, should we keep this class untouched and avoid any changes around the internal row by adding the following, so that the code compiles on both Spark 3 and 4?  Variant support should be a different discussion, and I'm not confident in the current approach of changing `HoodieInternalRow`.
   ```
     // @Override - no override
     public VariantVal getVariant(int ordinal) {
       throw new HoodieNotSupportedException("Variant type is not supported");
     }
   ```
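   To make the cross-version trick above concrete: leaving off `@Override` lets a method compile whether or not the superclass declares it, so the same source would build against both Spark 3 (whose `InternalRow` has no `getVariant`) and Spark 4 (whose does). A minimal self-contained sketch, where `Base`, `VariantVal`, and `HoodieNotSupportedException` are simplified stand-ins, not the real Spark/Hudi types:
   ```java
   // Stand-in for Spark 4's VariantVal (the type is absent in Spark 3).
   class VariantVal {}

   // Simplified stand-in for Hudi's HoodieNotSupportedException.
   class HoodieNotSupportedException extends RuntimeException {
     HoodieNotSupportedException(String msg) { super(msg); }
   }

   // Stand-in for InternalRow as it looks in Spark 3: no getVariant declared.
   abstract class Base {}

   class Row extends Base {
     // No @Override annotation: this compiles whether Base declares
     // getVariant (as in Spark 4) or not (as in Spark 3). With @Override,
     // the Spark 3 build would fail because there is nothing to override.
     public VariantVal getVariant(int ordinal) {
       throw new HoodieNotSupportedException("Variant type is not supported");
     }
   }
   ```
   If `Base` later gains a `getVariant(int)` returning `VariantVal`, `Row`'s method becomes a true override with no source change; until then it is an ordinary member that fails fast when called.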



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableAsSelectCommand.scala:
##########
@@ -100,7 +101,8 @@ case class CreateHoodieTableAsSelectCommand(
         HiveSyncConfigHolder.HIVE_TABLE_PROPERTIES.key -> ConfigUtils.configToString(updatedTable.properties.asJava),
         HoodieWriteConfig.COMBINE_BEFORE_INSERT.key -> "false",
         DataSourceWriteOptions.SQL_INSERT_MODE.key -> InsertMode.NON_STRICT.value(),
-        DataSourceWriteOptions.SQL_ENABLE_BULK_INSERT.key -> "true"
+        DataSourceWriteOptions.SQL_ENABLE_BULK_INSERT.key -> "true",
+        "path" -> tablePath

Review Comment:
   Could you dig deeper into the root cause?  This is a behavior change that could have side effects.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
