[ https://issues.apache.org/jira/browse/HIVE-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416644#comment-15416644 ]
Hive QA commented on HIVE-14270:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12823131/HIVE-14270.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10410 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part1
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/848/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/848/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-848/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12823131 - PreCommit-HIVE-MASTER-Build

> Write temporary data to HDFS when doing inserts on tables located on S3
> -----------------------------------------------------------------------
>
>                 Key: HIVE-14270
>                 URL: https://issues.apache.org/jira/browse/HIVE-14270
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergio Peña
>            Assignee: Sergio Peña
>         Attachments: HIVE-14270.1.patch, HIVE-14270.2.patch, HIVE-14270.3.patch, HIVE-14270.4.patch, HIVE-14270.5.patch, HIVE-14270.6.patch
>
>
> Currently, when doing INSERT statements on tables located on S3, Hive writes and reads the temporary (or intermediate) files on S3 as well.
> If HDFS is still the default filesystem in Hive, then we can keep such temporary files on HDFS so that these operations run faster (see the sketch at the end of this message).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
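
For illustration only, a minimal HiveQL sketch of the scenario the issue description refers to; the table names, column layout, and S3 bucket below are made-up assumptions and are not taken from the patch:

{noformat}
-- Hypothetical external table whose data lives on S3 (bucket and path are made up).
CREATE EXTERNAL TABLE sales_s3 (id INT, amount DOUBLE)
LOCATION 's3a://example-bucket/warehouse/sales_s3';

-- Before this change, the intermediate (staging) files produced while running this
-- INSERT are also written to and read back from S3 before the final move into place.
-- With the change described above, that intermediate data can stay on HDFS (the
-- default filesystem), and only the final result files are written to the S3 location.
INSERT INTO TABLE sales_s3
SELECT id, amount FROM sales_local;
{noformat}

Exactly where the intermediate data is placed is decided by the patch itself; the sketch only shows the INSERT-into-an-S3-table case that the description targets.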