[ https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445502#comment-16445502 ]
Hive QA commented on HIVE-18910:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919858/HIVE-18910.35.patch

{color:green}SUCCESS:{color} +1 due to 28 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 42 failed/errored test(s), 14286 tests executed

*Failed tests:*
{noformat}
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=216)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] (batchId=252)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join32] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input31] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_non_hdfs_path] (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullgroup3] (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullscript] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sample2] (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sample4] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sample7] (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sample9] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_7] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=252)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic] (batchId=252)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword] (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window] (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=104)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part] (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets] (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two] (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_uri_load_data] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=97)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=97)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=97)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=97)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=224)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=227)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=227)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=227)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=234)
org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=253)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10362/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10362/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10362/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 42 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919858 - PreCommit-HIVE-Build

> Migrate to Murmur hash for shuffle and bucketing
> ------------------------------------------------
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
> Issue Type: Task
> Reporter: Deepak Jaiswal
> Assignee: Deepak Jaiswal
> Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, HIVE-18910.11.patch,
> HIVE-18910.12.patch, HIVE-18910.13.patch, HIVE-18910.14.patch, HIVE-18910.15.patch,
> HIVE-18910.16.patch, HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch,
> HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.21.patch, HIVE-18910.22.patch,
> HIVE-18910.23.patch, HIVE-18910.24.patch, HIVE-18910.25.patch, HIVE-18910.26.patch,
> HIVE-18910.27.patch, HIVE-18910.28.patch, HIVE-18910.29.patch, HIVE-18910.3.patch,
> HIVE-18910.30.patch, HIVE-18910.31.patch, HIVE-18910.32.patch, HIVE-18910.33.patch,
> HIVE-18910.34.patch, HIVE-18910.35.patch, HIVE-18910.4.patch, HIVE-18910.5.patch,
> HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses Java hashing, which does not distribute values as well or as efficiently as Murmur hash when bucketing a table.
> Migrate to Murmur hash, but keep backward compatibility for existing users so that they don't have to reload their existing tables.
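
To make the description above concrete, here is a minimal, hypothetical Java sketch of the two bucketing schemes side by side. None of the names below (BucketHashSketch, NUM_BUCKETS, bucketFor) and none of the library choices (Guava's murmur3_32 standing in for the Murmur implementation) come from the HIVE-18910 patches; only the idea is taken from the description: bucket ids used to be derived from Java's hashCode(), the change derives them from Murmur hash instead, and some per-table switch preserves the old layout so existing bucketed tables do not have to be reloaded.

{code:java}
// Illustrative sketch only -- not code from HIVE-18910. Class, method and
// constant names are made up; Guava's Murmur3 stands in for whatever Murmur
// implementation the patches actually use.
import java.nio.charset.StandardCharsets;

import com.google.common.hash.Hashing;

public class BucketHashSketch {

  // Hypothetical bucket count for a bucketed table.
  private static final int NUM_BUCKETS = 32;

  // Legacy scheme: Java's String.hashCode(), reduced to a bucket id with the
  // conventional (hash & Integer.MAX_VALUE) % numBuckets mapping.
  static int legacyBucket(String key) {
    return (key.hashCode() & Integer.MAX_VALUE) % NUM_BUCKETS;
  }

  // New scheme: Murmur3 over the key bytes, same reduction to a bucket id.
  static int murmurBucket(String key) {
    int h = Hashing.murmur3_32().hashString(key, StandardCharsets.UTF_8).asInt();
    return (h & Integer.MAX_VALUE) % NUM_BUCKETS;
  }

  // Backward compatibility, sketched as a per-table version switch: tables
  // written with the old layout keep the legacy hash, new tables use Murmur,
  // so existing data does not need to be reloaded.
  static int bucketFor(String key, int bucketingVersion) {
    return bucketingVersion == 1 ? legacyBucket(key) : murmurBucket(key);
  }

  public static void main(String[] args) {
    for (String key : new String[] {"alice", "bob", "carol"}) {
      System.out.printf("%-5s -> legacy bucket %2d, murmur bucket %2d%n",
          key, legacyBucket(key), murmurBucket(key));
    }
  }
}
{code}

The (hash & Integer.MAX_VALUE) % numBuckets step is just the usual way to map a signed hash onto a non-negative bucket id; how the per-table compatibility switch is actually represented is defined by the attached patches, not by this sketch.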