[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17007879#comment-17007879 ]
Hive QA commented on HIVE-22595:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12989911/HIVE-22595.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17787 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20073/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20073/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20073/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12989911 - PreCommit-HIVE-Build

> Dynamic partition inserts fail on Avro table with external schema
> ------------------------------------------------------------------
>
>                 Key: HIVE-22595
>                 URL: https://issues.apache.org/jira/browse/HIVE-22595
>             Project: Hive
>          Issue Type: Bug
>          Components: Avro, Serializers/Deserializers
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>            Priority: Major
>         Attachments: HIVE-22595.1.patch, HIVE-22595.2.patch, HIVE-22595.3.patch
>
>
> Example qfile test:
> {noformat}
> create external table avro_extschema_insert1 (name string) partitioned by (p1 string)
>     stored as avro tblproperties ('avro.schema.url'='${system:test.tmp.dir}/table1.avsc');
> create external table avro_extschema_insert2 like avro_extschema_insert1;
> insert overwrite table avro_extschema_insert1 partition (p1='part1')
>     values ('col1_value', 1, 'col3_value');
> insert overwrite table avro_extschema_insert2 partition (p1)
>     select * from avro_extschema_insert1;
> {noformat}
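> The referenced table1.avsc is not reproduced in this report. For illustration only, a schema consistent with the three values inserted above (string, int, string) could look like the sketch below; the field names and namespace are assumptions, not the actual test file. Because avro.schema.url is set, the Avro SerDe takes the table's column layout from this external schema rather than from the single column declared in the CREATE TABLE statement, which is why a three-value insert is accepted.
> {noformat}
> {
>   "namespace": "org.apache.hive",
>   "name": "table1",
>   "type": "record",
>   "fields": [
>     {"name": "col1", "type": "string"},
>     {"name": "col2", "type": "int"},
>     {"name": "col3", "type": "string"}
>   ]
> }
> {noformat}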
> The last statement fails with the following error:
> {noformat}
> ], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1575484789169_0003_4_00_000000_3:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>   at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>   at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   ... 19 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Number of input columns was different than output columns (in = 2 vs out = 1
>   at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1047)
>   at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>   at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>   at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>   at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153)
>   at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
>   ... 20 more
> Caused by: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Number of input columns was different than output columns (in = 2 vs out = 1
>   at org.apache.hadoop.hive.serde2.avro.AvroSerializer.serialize(AvroSerializer.java:77)
>   at org.apache.hadoop.hive.serde2.avro.AvroSerDe.serialize(AvroSerDe.java:223)
>   at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:951)
>   ... 29 more
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)