[ https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16438329#comment-16438329 ]
Hive QA commented on HIVE-19186:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12918808/HIVE-19186.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 86 failed/errored test(s), 14150 tests executed

*Failed tests:*

{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247)
[truncate_table_failure4].[truncate_table_failure4] (batchId=95)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_list_bucketing] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_seqfile] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_nonexistant_column] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_partition_column2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_partition_column] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_table_failure1] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_table_failure2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_table_failure6] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_if_wrong_args_len] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_in] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_instr_wrong_args_len] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_invalid] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_likeall_wrong1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_likeany_wrong1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_map_keys_arg_num] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_map_keys_arg_type] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_map_values_arg_type] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_max] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_min] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_next_day_error_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_next_day_error_2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_nonexistent_resource] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_printf_wrong4] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_sort_array_by_wrong1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_sort_array_wrong1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_sort_array_wrong2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_sort_array_wrong3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_trunc_error1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_trunc_error2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_trunc_error3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udtf_explode_not_supported4] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udtf_not_supported1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udtf_not_supported3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[union2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[unionSortBy] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[uniquejoin3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[uniquejoin] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[unset_table_property] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[updateBasicStats] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_bucket_col] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_non_acid_table] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_partition_col] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[view_update] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[windowing_invalid_udaf] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[windowing_leadlag_in_udaf] (batchId=96)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testMetastoreVersion (batchId=227)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMatching (batchId=227)
org.apache.hadoop.hive.metastore.TestStats.partitionedTableInHiveCatalog (batchId=211)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hive.jdbc.TestActivePassiveHA.testClientConnectionsOnFailover (batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=238)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=242)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10200/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10200/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10200/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 86 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12918808 - PreCommit-HIVE-Build

> Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-19186
>                 URL: https://issues.apache.org/jira/browse/HIVE-19186
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Planning
>    Affects Versions: 3.0.0
>            Reporter: Steve Yeom
>            Assignee: Steve Yeom
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-19186.01.patch
>
>
> One problem test case is:
> create table intermediate(key int) partitioned by (p int) stored as orc;
> insert into table intermediate partition(p='455') select distinct key from src where key >= 0 order by key desc limit 2;
> insert into table intermediate partition(p='456') select distinct key from src where key is not null order by key asc limit 2;
> insert into table intermediate partition(p='457') select distinct key from src where key >= 100 order by key asc limit 2;
> create table multi_partitioned (key int, key2 int) partitioned by (p int);
> from intermediate
> insert into table multi_partitioned partition(p=2) select p, key
> insert overwrite table multi_partitioned partition(p=1) select key, p;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
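As a minimal sketch (not verified output from the run above), assuming the quoted DDL/DML has been executed against a populated {{src}} table, the effect of mixing INSERT INTO and INSERT OVERWRITE in the multi-insert can be observed by inspecting the target table's partitions:

{noformat}
-- Hedged sketch: partition p=2 was written via INSERT INTO and p=1 via INSERT OVERWRITE.
-- If the multi-insert plan mishandles the mix, one of these partitions may be missing
-- rows or contain stale data. Expected contents depend on the data in src.
show partitions multi_partitioned;
select p, key, key2 from multi_partitioned order by p, key;
{noformat}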