[ https://issues.apache.org/jira/browse/HIVE-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273611#comment-14273611 ]
Hive QA commented on HIVE-9200:
-------------------------------

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12691630/HIVE-9200.05.patch

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2339/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2339/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2339/

Messages:
{noformat}
**** This message was trimmed, see log for full details ****
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "KW_BETWEEN KW_MAP LPAREN" using multiple alternatives: 8, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL KW_VIEW" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER KW_BY" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION KW_ALL" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE KW_BY" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:401:5: Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN" using multiple alternatives: 2, 9
As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:526:5: Decision can match input such as "{AMPERSAND..BITWISEXOR, DIV..DIVIDE, EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE}" using multiple alternatives: 1, 3
As a result, alternative(s) 3 were disabled for that input
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-exec ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-exec ---
[INFO] Compiling 2075 source files to /data/hive-ptest/working/apache-svn-trunk-source/ql/target/classes
[INFO] -------------------------------------------------------------
[WARNING] COMPILATION WARNING :
[INFO] -------------------------------------------------------------
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinBytesTableContainer.java: Some input files use or override a deprecated API.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinBytesTableContainer.java: Recompile with -Xlint:deprecation for details.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java: Some input files use unchecked or unsafe operations.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java: Recompile with -Xlint:unchecked for details.
[INFO] 4 warnings
[INFO] -------------------------------------------------------------
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[79,12] no suitable method found for canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext)
    method org.apache.hadoop.hive.ql.optimizer.spark.SparkSortMergeJoinOptimizer.canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext,java.util.Stack<org.apache.hadoop.hive.ql.lib.Node>) is not applicable
      (actual and formal argument lists differ in length)
    method org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx) is not applicable
      (actual and formal argument lists differ in length)
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[107,33] method convertJoinToBucketMapJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
  required: org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
  found: org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
  reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[109,13] method convertBucketMapJoinToSMBJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
  required: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
  found: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
  reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java:[133,39] cannot find symbol
  symbol:   method getJoinContext()
  location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java:[377,25] cannot find symbol
  symbol:   method getJoinContext()
  location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapjoinProc.java:[75,13] method checkConvertBucketMapJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractBucketJoinProc cannot be applied to given types;
  required: org.apache.hadoop.hive.ql.optimizer.BucketJoinProcCtx,java.util.Map<java.lang.String,org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.util.Map<java.lang.Byte,java.util.List<org.apache.hadoop.hive.ql.plan.ExprNodeDesc>>,java.lang.String,java.util.List<java.lang.String>
  found: org.apache.hadoop.hive.ql.parse.ParseContext,org.apache.hadoop.hive.ql.optimizer.BucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.QBJoinTree,java.util.Map<java.lang.Byte,java.util.List<org.apache.hadoop.hive.ql.plan.ExprNodeDesc>>,java.lang.String,java.util.List<java.lang.String>
  reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[143,34] cannot find symbol
  symbol:   method getJoinContext()
  location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[145,34] cannot find symbol
  symbol:   method getMapJoinContext()
  location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[147,34] cannot find symbol
  symbol:   method getSmbMapJoinContext()
  location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSMBJoinHintOptimizer.java:[73,7] method convertBucketMapJoinToSMBJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
  required: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
  found: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
  reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java:[67,33] method convertJoinOpMapJoinOp in class org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor cannot be applied to given types;
  required: org.apache.hadoop.hive.conf.HiveConf,java.util.LinkedHashMap<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>,org.apache.hadoop.hive.ql.parse.OpParseContext>,org.apache.hadoop.hive.ql.exec.JoinOperator,boolean,java.lang.String[],java.util.List<java.lang.String>,int,boolean
  found: org.apache.hadoop.hive.conf.HiveConf,java.util.LinkedHashMap<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>,org.apache.hadoop.hive.ql.parse.OpParseContext>,org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.parse.QBJoinTree,int,boolean
  reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java:[50,3] method does not override or implement a method from a supertype
[INFO] 12 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Hive .............................................. SUCCESS [11.688s]
[INFO] Hive Shims Common ................................. SUCCESS [11.960s]
[INFO] Hive Shims 0.20S .................................. SUCCESS [2.750s]
[INFO] Hive Shims 0.23 ................................... SUCCESS [11.009s]
[INFO] Hive Shims Scheduler .............................. SUCCESS [2.007s]
[INFO] Hive Shims ........................................ SUCCESS [1.445s]
[INFO] Hive Common ....................................... SUCCESS [26.221s]
[INFO] Hive Serde ........................................ SUCCESS [18.062s]
[INFO] Hive Metastore .................................... SUCCESS [40.433s]
[INFO] Hive Ant Utilities ................................ SUCCESS [1.893s]
[INFO] Spark Remote Client ............................... SUCCESS [24.064s]
[INFO] Hive Query Language ............................... FAILURE [1:05.647s]
[INFO] Hive Service ...................................... SKIPPED
[INFO] Hive Accumulo Handler ............................. SKIPPED
[INFO] Hive JDBC ......................................... SKIPPED
[INFO] Hive Beeline ...................................... SKIPPED
[INFO] Hive CLI .......................................... SKIPPED
[INFO] Hive Contrib ...................................... SKIPPED
[INFO] Hive HBase Handler ................................ SKIPPED
[INFO] Hive HCatalog ..................................... SKIPPED
[INFO] Hive HCatalog Core ................................ SKIPPED
[INFO] Hive HCatalog Pig Adapter ......................... SKIPPED
[INFO] Hive HCatalog Server Extensions ................... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client ................. SKIPPED
[INFO] Hive HCatalog Webhcat ............................. SKIPPED
[INFO] Hive HCatalog Streaming ........................... SKIPPED
[INFO] Hive HWI .......................................... SKIPPED
[INFO] Hive ODBC ......................................... SKIPPED
[INFO] Hive Shims Aggregator ............................. SKIPPED
[INFO] Hive TestUtils .................................... SKIPPED
[INFO] Hive Packaging .................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3:39.975s
[INFO] Finished at: Mon Jan 12 08:54:09 EST 2015
[INFO] Final Memory: 106M/804M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-exec: Compilation failure: Compilation failure:
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[79,12] no suitable method found for canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext)
[ERROR] method org.apache.hadoop.hive.ql.optimizer.spark.SparkSortMergeJoinOptimizer.canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext,java.util.Stack<org.apache.hadoop.hive.ql.lib.Node>) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToSMBJoin(org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[107,33] method convertJoinToBucketMapJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
[ERROR] found: org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSortMergeJoinOptimizer.java:[109,13] method convertBucketMapJoinToSMBJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
[ERROR] found: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java:[133,39] cannot find symbol
[ERROR] symbol: method getJoinContext()
[ERROR] location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java:[377,25] cannot find symbol
[ERROR] symbol: method getJoinContext()
[ERROR] location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapjoinProc.java:[75,13] method checkConvertBucketMapJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractBucketJoinProc cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hive.ql.optimizer.BucketJoinProcCtx,java.util.Map<java.lang.String,org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.util.Map<java.lang.Byte,java.util.List<org.apache.hadoop.hive.ql.plan.ExprNodeDesc>>,java.lang.String,java.util.List<java.lang.String>
[ERROR] found: org.apache.hadoop.hive.ql.parse.ParseContext,org.apache.hadoop.hive.ql.optimizer.BucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.QBJoinTree,java.util.Map<java.lang.Byte,java.util.List<org.apache.hadoop.hive.ql.plan.ExprNodeDesc>>,java.lang.String,java.util.List<java.lang.String>
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[143,34] cannot find symbol
[ERROR] symbol: method getJoinContext()
[ERROR] location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[145,34] cannot find symbol
[ERROR] symbol: method getMapJoinContext()
[ERROR] location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSkewJoinProcFactory.java:[147,34] cannot find symbol
[ERROR] symbol: method getSmbMapJoinContext()
[ERROR] location: variable parseContext of type org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkSMBJoinHintOptimizer.java:[73,7] method convertBucketMapJoinToSMBJoin in class org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx
[ERROR] found: org.apache.hadoop.hive.ql.exec.MapJoinOperator,org.apache.hadoop.hive.ql.optimizer.SortBucketJoinProcCtx,org.apache.hadoop.hive.ql.parse.ParseContext
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java:[67,33] method convertJoinOpMapJoinOp in class org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hive.conf.HiveConf,java.util.LinkedHashMap<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>,org.apache.hadoop.hive.ql.parse.OpParseContext>,org.apache.hadoop.hive.ql.exec.JoinOperator,boolean,java.lang.String[],java.util.List<java.lang.String>,int,boolean
[ERROR] found: org.apache.hadoop.hive.conf.HiveConf,java.util.LinkedHashMap<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>,org.apache.hadoop.hive.ql.parse.OpParseContext>,org.apache.hadoop.hive.ql.exec.JoinOperator,org.apache.hadoop.hive.ql.parse.QBJoinTree,int,boolean
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] /data/hive-ptest/working/apache-svn-trunk-source/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkMapJoinProcessor.java:[50,3] method does not override or implement a method from a supertype
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-exec
+ exit 1
'
{noformat}

This message is automatically generated.
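Nearly all twelve failures above are variants of one pattern: the patch's spark-side call sites still pass a trailing QBJoinTree/ParseContext argument (or call accessors such as getJoinContext()) that appear to have been removed from the trunk API, so javac reports "actual and formal argument lists differ in length" or "cannot find symbol". A minimal self-contained sketch of that failure mode, with hypothetical names rather than Hive's actual classes:

```java
// Illustration only: SignatureDriftDemo, SortBucketCtx, and ParseCtx are
// hypothetical stand-ins, not Hive classes.
public class SignatureDriftDemo {
    static class SortBucketCtx {}   // stand-in for an optimizer context type
    static class ParseCtx {}        // stand-in for the removed parameter's type

    // Post-refactor signature: the ParseCtx parameter has been dropped.
    static boolean canConvertJoin(String joinName, SortBucketCtx ctx) {
        return joinName != null && ctx != null;
    }

    public static void main(String[] args) {
        // A stale call site written against the old signature would read:
        //   canConvertJoin("smb", new SortBucketCtx(), new ParseCtx());
        // and fail to compile with:
        //   "actual and formal argument lists differ in length"
        // The fix is to update every call site to the new argument list:
        System.out.println(canConvertJoin("smb", new SortBucketCtx()));
    }
}
```

Rebasing the patch onto current trunk so the call sites match the refactored signatures would resolve this class of error.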
ATTACHMENT ID: 12691630 - PreCommit-HIVE-TRUNK-Build

> CBO (Calcite Return Path): Inline Join, Properties
> --------------------------------------------------
>
>                 Key: HIVE-9200
>                 URL: https://issues.apache.org/jira/browse/HIVE-9200
>             Project: Hive
>          Issue Type: Sub-task
>          Components: CBO
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>             Fix For: 0.15.0
>
>         Attachments: HIVE-9200.01.patch, HIVE-9200.02.patch, HIVE-9200.03.patch, HIVE-9200.04.patch, HIVE-9200.05.patch, HIVE-9200.patch
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)