-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49565
-----------------------------------------------------------
Ship it!

A few more minor code style issues; apart from that it looks good. Thank you!

ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java
<https://reviews.apache.org/r/24221/#comment86711>
    Wrong indentation (3 -> 2)

ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
<https://reviews.apache.org/r/24221/#comment86712>
    No need to wrap

ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
<https://reviews.apache.org/r/24221/#comment86713>
    Missing space

ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
<https://reviews.apache.org/r/24221/#comment86714>
    Missing space

- Lars Francke


On Aug. 5, 2014, 5:32 a.m., chengxiang li wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> -----------------------------------------------------------
>
> (Updated Aug. 5, 2014, 5:32 a.m.)
>
>
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
>
>
> Bugs: HIVE-7567
>     https://issues.apache.org/jira/browse/HIVE-7567
>
>
> Repository: hive-git
>
>
> Description
> -------
>
> Support automatically adjusting the reducer number, the same as in MR,
> configured through the following three parameters:
>
> In order to change the average load for a reducer (in bytes):
>     set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>     set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>     set mapreduce.job.reduces=<number>
>
>
> Diffs
> -----
>
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 73553ee
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java PRE-CREATION
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java PRE-CREATION
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318
>
> Diff: https://reviews.apache.org/r/24221/diff/
>
>
> Testing
> -------
>
>
> Thanks,
>
> chengxiang li
>
>
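[Editor's note: the quoted description above configures reducer parallelism through three parameters. As background, the MR-style estimation this patch mirrors is, roughly, "one reducer per `hive.exec.reducers.bytes.per.reducer` bytes of input, capped by `hive.exec.reducers.max`". The following is a minimal illustrative sketch of that arithmetic only; the class and method names are hypothetical and are not the actual Hive API in SetSparkReducerParallelism.]

```java
/**
 * Hypothetical sketch of MR-style reducer-count estimation:
 * reducers = clamp(ceil(totalInputBytes / bytesPerReducer), 1, maxReducers).
 */
public class ReducerEstimator {
    /**
     * @param totalInputBytes total size of the input data in bytes
     * @param bytesPerReducer target load per reducer
     *                        (hive.exec.reducers.bytes.per.reducer)
     * @param maxReducers     upper bound (hive.exec.reducers.max)
     */
    public static int estimateReducers(long totalInputBytes,
                                       long bytesPerReducer,
                                       int maxReducers) {
        // Ceiling division: one reducer per bytesPerReducer chunk of input.
        long estimated = (totalInputBytes + bytesPerReducer - 1) / bytesPerReducer;
        // At least one reducer, never more than the configured maximum.
        return (int) Math.max(1, Math.min(estimated, maxReducers));
    }

    public static void main(String[] args) {
        // 1 GiB of input at 256 MiB per reducer -> 4 reducers.
        System.out.println(estimateReducers(1L << 30, 256L << 20, 999)); // 4
    }
}
```

Note that a fixed `mapreduce.job.reduces` setting, when present, bypasses this estimate entirely.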