[ https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681769#comment-16681769 ]
Hive QA commented on HIVE-20512:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12947442/HIVE-20512.9.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15501 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=189)
	[infer_bucket_sort_reducers_power_two.q,list_bucket_dml_10.q,orc_merge9.q,leftsemijoin_mr.q,bucket6.q,bucketmapjoin7.q,uber_reduce.q,empty_dir_in_table.q,vector_outer_join2.q,spark_explain_groupbyshuffle.q,spark_dynamic_partition_pruning.q,spark_combine_equivalent_work.q,orc_merge1.q,spark_use_op_stats.q,orc_merge_diff_fs.q,quotedid_smb.q,truncate_column_buckets.q,spark_vectorized_dynamic_partition_pruning.q,spark_in_process_launcher.q,orc_merge3.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=128)
	[load_dyn_part15.q,explaindenpendencydiffengs.q,transform2.q,groupby5.q,cbo_semijoin.q,bucketmapjoin13.q,alter_merge_stats_orc.q,subquery_scalar.q,union_remove_2.q,groupby_position.q,join12.q,smb_mapjoin_8.q,subquery_select.q,join21.q,auto_join16.q]
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14836/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14836/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14836/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12947442 - PreCommit-HIVE-Build

> Improve record and memory usage logging in SparkRecordHandler
> -------------------------------------------------------------
>
>                 Key: HIVE-20512
>                 URL: https://issues.apache.org/jira/browse/HIVE-20512
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Bharathkrishna Guruvayoor Murali
>            Priority: Major
>         Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, HIVE-20512.3.patch, HIVE-20512.4.patch, HIVE-20512.5.patch, HIVE-20512.6.patch, HIVE-20512.7.patch, HIVE-20512.8.patch, HIVE-20512.9.patch
>
>
> We currently log memory usage and the number of records processed in Spark tasks, but we should improve how frequently this information is logged. Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by the reducer.
>   // The threshold grows 10x at a time until it reaches 1 million rows, and advances
>   // by 1 million rows after that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that, because the threshold grows by a factor of 10, a task eventually has to process a very large number of records before the next log line is emitted.
> A better approach would be to log this information at a fixed time interval (a rough sketch of such a logger follows below). This would help in debugging tasks that appear to be hung.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
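For illustration, here is a minimal sketch of what interval-based progress logging could look like. It is not taken from any of the attached patches; the class name IntervalProgressLogger, the 30-second default interval, and logging via System.out are assumptions made only for this sketch.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper (not from the HIVE-20512 patches): logs the record count
// and JVM memory usage at most once per fixed wall-clock interval, no matter
// how many records arrive in between. Assumes a single processing thread,
// as in a record handler.
public class IntervalProgressLogger {

  private static final long DEFAULT_INTERVAL_MS = 30_000L; // assumed interval

  private final long intervalMs;
  private final AtomicLong recordCount = new AtomicLong();
  private long lastLogTimeMs = System.currentTimeMillis();

  public IntervalProgressLogger(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Call once per processed record. */
  public void recordProcessed() {
    long count = recordCount.incrementAndGet();
    long now = System.currentTimeMillis();
    // Cheap per-record time check; the logging itself only happens once the
    // configured interval has elapsed.
    if (now - lastLogTimeMs >= intervalMs) {
      lastLogTimeMs = now;
      Runtime rt = Runtime.getRuntime();
      long usedMem = rt.totalMemory() - rt.freeMemory();
      System.out.println("Processed " + count + " records; used memory = "
          + usedMem + " bytes, max memory = " + rt.maxMemory() + " bytes");
    }
  }

  public static void main(String[] args) {
    // Demo with a short interval so a few log lines appear quickly.
    IntervalProgressLogger logger = new IntervalProgressLogger(1_000L);
    for (long i = 0; i < 50_000_000L; i++) {
      logger.recordProcessed();
    }
  }
}
{code}

An alternative that avoids the per-record time check entirely would be to schedule the log statement on a background timer (for example a ScheduledExecutorService) and have the processing loop only increment the counter.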