[ 
https://issues.apache.org/jira/browse/HIVE-25166?focusedWorklogId=604331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-604331
 ]

ASF GitHub Bot logged work on HIVE-25166:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Jun/21 06:29
            Start Date: 01/Jun/21 06:29
    Worklog Time Spent: 10m 
      Work Description: kasakrisz commented on pull request #2325:
URL: https://github.com/apache/hive/pull/2325#issuecomment-851857576


   This is not the right approach because it adds an extra column to the 
shuffle, which is not always necessary. 
   Closing this. See #2334 for a fix for the original issue.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 604331)
    Time Spent: 40m  (was: 0.5h)

> Query with multiple count(distinct constant) fails
> --------------------------------------------------
>
>                 Key: HIVE-25166
>                 URL: https://issues.apache.org/jira/browse/HIVE-25166
>             Project: Hive
>          Issue Type: Bug
>          Components: CBO
>            Reporter: Krisztian Kasa
>            Assignee: Krisztian Kasa
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> select count(distinct 0), count(distinct null) from alltypes;
> {code}
> {code}
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Expression not 
> in GROUP BY key 'TOK_NULL'
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:12941)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:12883)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:4695)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:4483)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:10960)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:10902)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11808)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11665)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11692)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11678)
>       at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:618)
>       at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12505)
>       at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:449)
>       at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:316)
>       at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:175)
>       at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:316)
>       at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:223)
>       at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:105)
>       at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:492)
>       at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:445)
>       at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:409)
>       at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:403)
>       at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:125)
>       at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:229)
>       at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:256)
>       at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
>       at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:353)
>       at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744)
>       at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714)
>       at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:170)
>       at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
>       at 
> org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>       at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>       at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at 
> org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135)
>       at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>       at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>       at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>       at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>       at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>       at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>       at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>       at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>       at org.junit.runners.Suite.runChild(Suite.java:128)
>       at org.junit.runners.Suite.runChild(Suite.java:27)
>       at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>       at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>       at 
> org.apache.hadoop.hive.cli.control.CliAdapter$1$1.evaluate(CliAdapter.java:95)
>       at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>       at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>       at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>       at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>       at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>       at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>       at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>       at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
>       at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
>       at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
>       at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> {code}
> CBO Plan:
> {code}
> HiveAggregate(group=[{}], agg#0=[count($0)], agg#1=[count($1)])
>   HiveProject($f0=[CASE(=($2, 1), 1, null:INTEGER)], $f1=[null:INTEGER])
>     HiveAggregate(group=[{0, 1}], GROUPING__ID=[GROUPING__ID()])
>       HiveProject($f0=[true], $f1=[true])
>         HiveTableScan(table=[[default, alltypes]], table:alias=[alltypes])
> {code}
> The query is rewritten to use an aggregate with one grouping set, but later in 
> ASTConverter this aggregate is treated as a simple GROUP BY without grouping 
> sets. However, the GROUPING__ID function can only be used when grouping sets 
> are defined.
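> As an illustration only (the table t and columns a, b below are made up, and 
> this is not the exact rewrite Hive's CBO emits), a query with multiple 
> count(distinct) expressions is typically expanded into a grouping-sets 
> aggregate plus an outer aggregate that picks each distinct count out of its 
> own grouping set:
> {code}
> -- Hypothetical sketch of the multi count(distinct) expansion:
> --   select count(distinct a), count(distinct b) from t;
> select count(case when g_b = 1 then a end) as cnt_distinct_a,
>        count(case when g_a = 1 then b end) as cnt_distinct_b
> from (
>   select a, b, grouping(a) as g_a, grouping(b) as g_b
>   from t
>   group by a, b grouping sets ((a), (b))
> ) expanded;
> {code}
> In the actual plan above, GROUPING__ID plays the role of g_a/g_b, which is why 
> ASTConverter must know that grouping sets are present when it converts the 
> aggregate back to AST.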



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
