-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25394/
-----------------------------------------------------------
(Updated Sept. 20, 2014, 12:04 a.m.)


Review request for hive, Brock Noland and Xuefu Zhang.


Changes
-------

Made some changes according to suggestions from Xuefu; also added more comments.


Bugs: HIVE-7503
    https://issues.apache.org/jira/browse/HIVE-7503


Repository: hive-git


Description
-------

For Hive's multi-insert query (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML), there may be an MR job for each insert. When we implement this on Spark, it would be nice if all the inserts could happen concurrently. It seems that this functionality isn't available in Spark. To make things worse, the source of the insert may be re-computed unless it is staged; and even with staging, the inserts happen sequentially, hurting performance. This task is to find out what it takes in Spark to enable concurrent inserts without requiring staging of the source or sequential insertion. If this has to be solved in Hive, find an optimal way to do it.


Diffs (updated)
-----

  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkProcContext.java 4211a07
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 695d8b9
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkWork.java 864965e
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 76fc290
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkMergeTaskProcessor.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkMultiInsertionProcessor.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkProcessAnalyzeTable.java 5fcaf64
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkTableScanProcessor.java PRE-CREATION

Diff: https://reviews.apache.org/r/25394/diff/


Testing
-------


Thanks,

Chao Sun
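For readers unfamiliar with the feature under discussion, a minimal sketch of Hive's multi-insert form (table names here are hypothetical; the syntax follows the LanguageManual+DML page linked above): a single FROM clause is scanned once and feeds several independent INSERT branches, each of which may become its own job under the MR execution engine.

```sql
-- Hypothetical tables, for illustration only.
-- One scan of src_table feeds two insert branches.
FROM src_table s
INSERT OVERWRITE TABLE dest1
  SELECT s.key, s.value WHERE s.key < 100
INSERT OVERWRITE TABLE dest2
  SELECT s.key, s.value WHERE s.key >= 100;
```

The question raised in this review is how to execute such branches on Spark concurrently, without either re-computing or materializing the shared source scan.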