BenjMaq opened a new issue #4208:
URL: https://github.com/apache/hudi/issues/4208


   **Describe the problem you faced**
   
   I'm trying to run a Spark SQL statement to alter a Hudi table. At some point my Spark job fails and the error `java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.alterTable(java.lang.String, org.apache.hadoop.hive.ql.metadata.Table)` is thrown.
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Run `alter table <TABLE> add columns(test string);` (see the sketch below for how the statement is issued)
   2. Error is thrown
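   
   For context, a minimal sketch of the driver code that issues the statement. The object name and the table name `my_hudi_table` are placeholders and the session setup is assumed; the real job runs inside `OptimusTranformer.scala`, as visible in the stack trace below.
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   object AlterHudiTableRepro {
     def main(args: Array[String]): Unit = {
       // Hive support is enabled, so the ALTER statement goes through
       // HiveExternalCatalog / HiveClientImpl, which is where the
       // NoSuchMethodException surfaces.
       val spark = SparkSession.builder()
         .appName("hudi-alter-table-repro")
         .enableHiveSupport()
         .getOrCreate()
   
       // "my_hudi_table" is a placeholder for an existing Hudi table that is
       // synced to the Hive metastore.
       spark.sql("alter table my_hudi_table add columns(test string)")
   
       // Sanity check: the new column should show up in the reported schema.
       spark.sql("describe my_hudi_table").show(truncate = false)
   
       spark.stop()
     }
   }
   ```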
   
   **Expected behavior**
   
   Table schema is correctly updated
   
   **Environment Description**
   
   * Hudi version : 0.9.0
   
   * Spark version : 2.4.4
   
   * Hive version : 2.3.5
   
   * Hadoop version : (`hadoop-aws` 2.7.3)
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : No
   
   **Stacktrace**
   
   ```
   Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.alterTable(java.lang.String, org.apache.hadoop.hive.ql.metadata.Table)
        at java.lang.Class.getMethod(Class.java:1786)
        at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:163)
        at org.apache.spark.sql.hive.client.Shim_v0_12.alterTableMethod$lzycompute(HiveShim.scala:255)
        at org.apache.spark.sql.hive.client.Shim_v0_12.alterTableMethod(HiveShim.scala:254)
        at org.apache.spark.sql.hive.client.Shim_v0_12.alterTable(HiveShim.scala:400)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTableDataSchema$1.apply$mcV$sp(HiveClientImpl.scala:534)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTableDataSchema$1.apply(HiveClientImpl.scala:513)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$alterTableDataSchema$1.apply(HiveClientImpl.scala:513)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:275)
        at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:213)
        at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:212)
        at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:258)
        at org.apache.spark.sql.hive.client.HiveClientImpl.alterTableDataSchema(HiveClientImpl.scala:513)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableDataSchema$1.apply$mcV$sp(HiveExternalCatalog.scala:671)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableDataSchema$1.apply(HiveExternalCatalog.scala:650)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$alterTableDataSchema$1.apply(HiveExternalCatalog.scala:650)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        at org.apache.spark.sql.hive.HiveExternalCatalog.alterTableDataSchema(HiveExternalCatalog.scala:650)
        at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.alterTableDataSchema(ExternalCatalogWithListener.scala:124)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.alterTableDataSchema(SessionCatalog.scala:391)
        at org.apache.spark.sql.hudi.command.AlterHoodieTableAddColumnsCommand.refreshSchemaInMeta(AlterHoodieTableAddColumnsCommand.scala:93)
        at org.apache.spark.sql.hudi.command.AlterHoodieTableAddColumnsCommand.run(AlterHoodieTableAddColumnsCommand.scala:72)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
        at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
        at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
        at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
        at com.twilio.optimustranformer.OptimusTranformer$$anonfun$main$1$$anonfun$apply$mcV$sp$1.apply(OptimusTranformer.scala:75)
        at com.twilio.optimustranformer.OptimusTranformer$$anonfun$main$1$$anonfun$apply$mcV$sp$1.apply(OptimusTranformer.scala:73)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at com.twilio.optimustranformer.OptimusTranformer$$anonfun$main$1.apply$mcV$sp(OptimusTranformer.scala:72)
        at scala.util.control.Breaks.breakable(Breaks.scala:38)
        at com.twilio.optimustranformer.OptimusTranformer$.main(OptimusTranformer.scala:71)
        at com.twilio.optimustranformer.OptimusTranformer.main(OptimusTranformer.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
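   
   For context on the trace: the failure goes through Spark's `Shim_v0_12` Hive shim, which looks up `Hive.alterTable(String, Table)` by reflection. Below is a minimal sketch of how the Hive metastore client version is typically pinned on the Spark side; `spark.sql.hive.metastore.version` and `spark.sql.hive.metastore.jars` are standard Spark settings, and whether changing them is relevant here is an assumption on my part, not a confirmed fix.
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Sketch only (spark-shell style): standard settings for pointing Spark's
   // isolated Hive client at a specific metastore version. Not confirmed as a
   // fix for this issue.
   val spark = SparkSession.builder()
     .appName("hudi-alter-table")
     .config("spark.sql.hive.metastore.version", "2.3.5") // match the Hive 2.3.5 metastore
     .config("spark.sql.hive.metastore.jars", "maven")     // pull matching client jars
     .enableHiveSupport()
     .getOrCreate()
   ```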
   
   

