[ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858822#comment-15858822 ]
Hive QA commented on HIVE-14901:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12851753/HIVE-14901.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10244 tests executed

*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=159)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=274)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3449/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3449/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3449/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12851753 - PreCommit-HIVE-Build

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch, HIVE-14901.3.patch, HIVE-14901.patch
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that we write in tasks. Ideally, however, we should use the user-supplied value (which can be extracted from the ThriftCLIService.FetchResults request parameter) to decide how many rows to serialize into a blob in the tasks. We should still use {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound, so that we don't go OOM in the tasks or in HS2.
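
For illustration, a minimal Java sketch of the capping behavior the description calls for, assuming the server clamps the client-requested fetch size against the configured maximum. This is not the actual patch: the class name {{FetchSizeClamp}}, the method {{effectiveFetchSize}}, and the default of 10000 are hypothetical stand-ins.

{code:java}
// Minimal sketch only: clamp a client-requested fetch size against the
// server-side cap hive.server2.thrift.resultset.max.fetch.size so that
// tasks and HS2 never serialize an unbounded number of rows per blob.
public final class FetchSizeClamp {

  // Hypothetical stand-in for the configured value of
  // hive.server2.thrift.resultset.max.fetch.size.
  private static final int SERVER_MAX_FETCH_SIZE = 10000;

  /**
   * Number of rows a task should serialize per blob: the user-supplied
   * value when positive, bounded above by the configured maximum.
   */
  static int effectiveFetchSize(long userRequestedRows, int serverMax) {
    if (userRequestedRows <= 0) {
      return serverMax; // no usable client value: fall back to the cap
    }
    return (int) Math.min(userRequestedRows, (long) serverMax);
  }

  public static void main(String[] args) {
    System.out.println(effectiveFetchSize(500L, SERVER_MAX_FETCH_SIZE));   // 500
    System.out.println(effectiveFetchSize(50000L, SERVER_MAX_FETCH_SIZE)); // 10000
    System.out.println(effectiveFetchSize(-1L, SERVER_MAX_FETCH_SIZE));    // 10000
  }
}
{code}

Clamping with {{Math.min}} rather than rejecting oversized requests keeps existing clients working: a client asking for more rows than the cap simply receives the capped batch size instead of an error.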