xiarixiaoyao commented on a change in pull request #4587:
URL: https://github.com/apache/hudi/pull/4587#discussion_r783934791



##########
File path: hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/TestAlterTable.scala
##########
@@ -53,24 +71,26 @@ class TestAlterTable extends TestHoodieSqlBase {
         assertResult(true) (
           spark.sessionState.catalog.tableExists(new TableIdentifier(newTableName))
         )
+
         val hadoopConf = spark.sessionState.newHadoopConf()
         val metaClient = HoodieTableMetaClient.builder().setBasePath(tablePath)
           .setConf(hadoopConf).build()
-        assertResult(newTableName) (
-          metaClient.getTableConfig.getTableName
-        )
+        assertResult(newTableName) (metaClient.getTableConfig.getTableName)
+
+        // insert some data
         spark.sql(s"insert into $newTableName values(1, 'a1', 10, 1000)")
 
-        // Add table column
+        // add column
         spark.sql(s"alter table $newTableName add columns(ext0 string)")
-        val table = spark.sessionState.catalog.getTableMetadata(new TableIdentifier(newTableName))
+        catalogTable = spark.sessionState.catalog.getTableMetadata(new TableIdentifier(newTableName))
         assertResult(Seq("id", "name", "price", "ts", "ext0")) {
-          HoodieSqlUtils.removeMetaFields(table.schema).fields.map(_.name)
+          HoodieSqlUtils.removeMetaFields(catalogTable.schema).fields.map(_.name)
         }
         checkAnswer(s"select id, name, price, ts, ext0 from $newTableName")(
           Seq(1, "a1", 10.0, 1000, null)
         )
-        // Alter table column type
+
+        // change column's data type
         spark.sql(s"alter table $newTableName change column id id bigint")

Review comment:
       Currently, Hudi on Spark cannot support data type changes. Hudi uses Spark's ParquetFileFormat to read parquet files, but that reader barely supports type changes; see the original Spark code in **ParquetVectorUpdaterFactory.getUpdater**.
   This test is actually wrong: if you add **spark.sql(s"select id from $newTableName").show(false)** at line 95, the test will fail.
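   For reference, a minimal standalone sketch of the underlying limitation (plain Spark without Hudi, assuming Spark 3.2+ with the default vectorized parquet reader; the path and object name are just illustrative): the parquet file stores the column as INT32, and asking the reader to return it as bigint fails because **ParquetVectorUpdaterFactory.getUpdater** has no updater for that combination.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StructField, StructType}

object ParquetTypeChangeRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("parquet-type-change-repro")
      .getOrCreate()
    import spark.implicits._

    // hypothetical scratch directory, adjust as needed
    val path = "/tmp/parquet_type_change_repro"

    // write the column as int, i.e. parquet physical type INT32
    Seq(1, 2, 3).toDF("id").write.mode("overwrite").parquet(path)

    // read it back declaring id as bigint, mirroring what a query sees
    // after `alter table ... change column id id bigint`
    val readSchema = StructType(Seq(StructField("id", LongType)))
    try {
      spark.read.schema(readSchema).parquet(path).show(false)
    } catch {
      case e: Exception =>
        // expected: "Parquet column cannot be converted ..." because
        // ParquetVectorUpdaterFactory.getUpdater has no INT32 -> bigint updater
        println(s"read failed as expected: ${e.getMessage}")
    }

    spark.stop()
  }
}
```
   This is the same conversion Hudi's parquet read path hits after the `change column id id bigint` statement, which is why adding that `select id` query makes the test fail.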
   



