> On Nov. 7, 2014, 2:36 p.m., Xuefu Zhang wrote:
> > 1. Looking at the diff, I wasn't sure where we are removing unnecessary
> > counter registrations.
> > 2. It would be great if we can have some tests that are enabled with
> > counter statistics collection, so that we know what kind of output we are
> > expecting and avoid future breakage.
Thanks, Xuefu. The Hive operator-level counters are used during Spark job execution, so our current tests should already cover that part of this patch. For table statistics collection, however, as you know we use fs as the default counter storage in tests. I ran the qtests locally before, but yes, the related change would not show up in the automated test results.

- chengxiang


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27720/#review60316
-----------------------------------------------------------


On Nov. 7, 2014, 5:27 a.m., chengxiang li wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27720/
> -----------------------------------------------------------
> 
> (Updated Nov. 7, 2014, 5:27 a.m.)
> 
> 
> Review request for hive and Xuefu Zhang.
> 
> 
> Bugs: HIVE-8777
>     https://issues.apache.org/jira/browse/HIVE-8777
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Currently we register all Hive operator counters in SparkCounters, while actually not all Hive operators are used in a given SparkTask. We should iterate over the SparkTask's operators and register only the counters that are required.
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkClient.java e955da3 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java 46b04bc 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/counter/SparkCounters.java bb3597a 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27720/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> chengxiang li
> 
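
For readers skimming the thread, a rough sketch of the idea described in the review request (walk the operator trees a task will actually run and register counters only for those operators, instead of registering every known operator type up front) could look like the following. The Op and CounterRegistry types are hypothetical stand-ins used purely for illustration; they are not the actual Hive or Spark classes touched by this patch.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class OperatorCounterRegistration {

      /** Hypothetical stand-in for an operator node in a task's operator tree. */
      interface Op {
        String getOperatorId();          // e.g. "MAPJOIN_3"
        List<Op> getChildOperators();    // children in the operator tree
      }

      /** Hypothetical stand-in for SparkCounters-style counter registration. */
      interface CounterRegistry {
        void register(String groupName, String counterName);
      }

      /** Walk each operator tree and register one counter per operator actually present. */
      public static void registerUsedOperatorCounters(List<Op> rootOperators,
                                                      CounterRegistry registry) {
        Set<Op> visited = new HashSet<>();
        Deque<Op> pending = new ArrayDeque<>(rootOperators);
        while (!pending.isEmpty()) {
          Op op = pending.pop();
          if (!visited.add(op)) {
            continue;  // already handled; operator trees may share nodes
          }
          // Register only the counters for operators this task will execute,
          // rather than for every operator type Hive knows about.
          registry.register("HIVE", op.getOperatorId());
          pending.addAll(op.getChildOperators());
        }
      }
    }

This is only a sketch of the traversal-and-register idea; the actual patch wires the equivalent logic through SparkWork, SparkTask, and SparkCounters as listed in the Diffs section above.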