[ https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089003#comment-17089003 ]
Hive QA commented on HIVE-23230:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000674/HIVE-23230.2.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 17135 tests executed

*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=215)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testMultipleBatchesOfComplexTypes (batchId=215)
org.apache.hive.jdbc.TestJdbcWithMiniLlapRow.testComplexQuery (batchId=216)
org.apache.hive.jdbc.TestJdbcWithMiniLlapRow.testMultipleBatchesOfComplexTypes (batchId=216)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testComplexQuery (batchId=218)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testLlapInputFormatEndToEnd (batchId=218)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testMultipleBatchesOfComplexTypes (batchId=218)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testTypesNestedInListWithLimitAndFilters (batchId=218)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testTypesNestedInMapWithLimitAndFilters (batchId=218)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrowBatch.testLlapInputFormatEndToEndWithMultipleBatches (batchId=216)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrowBatch.testMultipleBatchesOfComplexTypes (batchId=216)
org.apache.hive.jdbc.TestNewGetSplitsFormat.testComplexQuery (batchId=216)
org.apache.hive.jdbc.TestNewGetSplitsFormat.testMultipleBatchesOfComplexTypes (batchId=216)
org.apache.hive.jdbc.TestNewGetSplitsFormatReturnPath.testComplexQuery (batchId=218)
org.apache.hive.jdbc.TestNewGetSplitsFormatReturnPath.testMultipleBatchesOfComplexTypes (batchId=218)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21831/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21831/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21831/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000674 - PreCommit-HIVE-Build

> "get_splits" udf ignores limit constraint while creating splits
> ---------------------------------------------------------------
>
>                 Key: HIVE-23230
>                 URL: https://issues.apache.org/jira/browse/HIVE-23230
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 3.1.0
>            Reporter: Adesh Kumar Rao
>            Assignee: Adesh Kumar Rao
>            Priority: Major
>         Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, HIVE-23230.patch
>
> Issue: Running the query {noformat}select * from <table> limit n{noformat}
> from Spark via the Hive Warehouse Connector may return more than "n" rows.
>
> This happens because the "get_splits" UDF creates splits while ignoring the
> limit constraint. When these splits are submitted to multiple LLAP daemons,
> each daemon returns up to "n" rows of its own.
>
> How to reproduce: needs spark-shell, the hive-warehouse-connector, and Hive
> on LLAP with more than one LLAP daemon running.
> Run the commands below via beeline to create and populate the table:
>
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;
> {noformat}
>
> Now, running the query below via spark-shell
>
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession
> val hive = HiveWarehouseSession.session(spark).build()
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
>
> will return more than one row.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
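To make the over-count described in the issue concrete: if the planner produces, say, three splits for "select * from test limit 1", each LLAP daemon applies the limit only to its own split, so the client can see up to three rows instead of one. The sketch below is only a possible client-side stopgap, not the fix carried by the attached patches; it assumes that executeQuery returns an ordinary Spark DataFrame (the .show() call in the reproduction suggests it does) and simply re-applies the limit on the Spark side.

{noformat}
// spark-shell sketch of a client-side workaround (illustrative only)
import com.hortonworks.hwc.HiveWarehouseSession

// Build the Hive Warehouse Connector session, exactly as in the reproduction steps.
val hive = HiveWarehouseSession.session(spark).build()

// Keep the LIMIT out of the pushed-down query and re-apply it on the Spark side.
// DataFrame.limit(n) caps the final result globally, regardless of how many
// splits / LLAP daemons contributed rows.
val df = hive.executeQuery("select * from test")
df.limit(1).show()
{noformat}

The trade-off is that the unrestricted query may scan and ship more rows to Spark than a correctly pushed-down limit would, so this only masks the symptom until get_splits honors the limit constraint while creating splits.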