[ 
https://issues.apache.org/jira/browse/HIVE-24084?focusedWorklogId=483143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483143
 ]

ASF GitHub Bot logged work on HIVE-24084:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 12/Sep/20 21:20
            Start Date: 12/Sep/20 21:20
    Worklog Time Spent: 10m 
      Work Description: kgyrtkirk commented on a change in pull request #1439:
URL: https://github.com/apache/hive/pull/1439#discussion_r487096103



##########
File path: ql/src/test/queries/clientpositive/tpch18.q
##########
@@ -0,0 +1,133 @@
+--! qt:dataset:tpch_0_001.customer
+--! qt:dataset:tpch_0_001.lineitem
+--! qt:dataset:tpch_0_001.nation
+--! qt:dataset:tpch_0_001.orders
+--! qt:dataset:tpch_0_001.part
+--! qt:dataset:tpch_0_001.partsupp
+--! qt:dataset:tpch_0_001.region
+--! qt:dataset:tpch_0_001.supplier
+
+
+use tpch_0_001;
+
+set hive.transpose.aggr.join=true;
+set hive.transpose.aggr.join.unique=true;
+set hive.mapred.mode=nonstrict;
+
+create view q18_tmp_cached as
+select
+       l_orderkey,
+       sum(l_quantity) as t_sum_quantity
+from
+       lineitem
+where
+       l_orderkey is not null
+group by
+       l_orderkey;
+
+
+
+explain cbo select
+c_name,
+c_custkey,
+o_orderkey,
+o_orderdate,
+o_totalprice,
+sum(l_quantity)
+from
+       customer,
+       orders,
+       q18_tmp_cached t,
+       lineitem l
+where
+c_custkey = o_custkey
+and o_orderkey = t.l_orderkey
+and o_orderkey is not null
+and t.t_sum_quantity > 300
+and o_orderkey = l.l_orderkey
+and l.l_orderkey is not null
+group by
+c_name,
+c_custkey,
+o_orderkey,
+o_orderdate,
+o_totalprice
+order by
+o_totalprice desc,
+o_orderdate
+limit 100;
+
+
+
+select 'add constraints';
+
+alter table orders add constraint pk_o primary key (o_orderkey) disable novalidate rely;
+alter table customer add constraint pk_c primary key (c_custkey) disable novalidate rely;
+

Review comment:
       I've added both constraints - it only removed the IS NOT NULL filter.
   It seems to me that one of the sum() results is used as an output and the other is used to filter by > 300, so both of them are "used".
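
   A minimal standalone sketch of the behavior discussed above (illustrative only, not part of the patch; the expected plan change is an assumption based on this comment - with a RELY primary key the CBO can treat the key column as non-null and drop the redundant predicate):

       -- hedged sketch: after declaring the RELY constraint, the IS NOT NULL
       -- predicate below should no longer appear in the CBO plan
       alter table orders add constraint pk_o primary key (o_orderkey) disable novalidate rely;
       explain cbo
       select o_orderkey, o_totalprice
       from orders
       where o_orderkey is not null;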

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
##########
@@ -303,6 +305,90 @@ public void onMatch(RelOptRuleCall call) {
     }
   }
 
+  /**
+   * Determines whether the given grouping is unique.
+   *
+   * Consider a join which might produce non-unique rows, but whose results are later aggregated again.
+   * This method determines whether the grouping contains sufficient columns that were previously present as unique column(s).
+   */
+  private boolean isGroupingUnique(RelNode input, ImmutableBitSet groups) {
+    if (groups.isEmpty()) {
+      return false;
+    }
+    RelMetadataQuery mq = input.getCluster().getMetadataQuery();
+    Set<ImmutableBitSet> uKeys = mq.getUniqueKeys(input);
+    for (ImmutableBitSet u : uKeys) {
+      if (groups.contains(u)) {
+        return true;
+      }
+    }
+    if (input instanceof Join) {
+      Join join = (Join) input;
+      RexBuilder rexBuilder = input.getCluster().getRexBuilder();
+      SimpleConditionInfo cond = new SimpleConditionInfo(join.getCondition(), rexBuilder);
+
+      if (cond.valid) {
+        ImmutableBitSet newGroup = groups.intersect(ImmutableBitSet.fromBitSet(cond.fields));
+        RelNode l = join.getLeft();
+        RelNode r = join.getRight();
+
+        int joinFieldCount = join.getRowType().getFieldCount();
+        int lFieldCount = l.getRowType().getFieldCount();
+
+        ImmutableBitSet groupL = newGroup.get(0, lFieldCount);
+        ImmutableBitSet groupR = newGroup.get(lFieldCount, joinFieldCount).shift(-lFieldCount);
+
+        if (isGroupingUnique(l, groupL)) {

Review comment:
       That could be done, and I'm sure it holds in this case - but this logic works better if it can walk down through as many joins as possible: we might have an aggregate on top with a bunch of joins under it, so I feel it is beneficial to retain the recursion.
   I felt tempted to write a RelMd handler - however, I don't think I could easily introduce a new one.
   RelShuttle doesn't look like a good match either - I'll leave it as a set of `instanceof` calls for now.

   I'll upload a new patch to see whether digging deeper into the tree can do more or not.
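
   A minimal self-contained Java sketch of the recursive walk described above (illustrative only, not the Hive implementation): Node, JoinNode, uniqueKeys and equiJoinFields are hypothetical stand-ins for Calcite's RelNode, Join, RelMetadataQuery and the join-condition fields, and the real rule's join handling is more involved than this simplified recursion - the hunk above is truncated at that point.

       import java.util.HashSet;
       import java.util.Set;

       // Hedged sketch: models the idea of isGroupingUnique on a tiny hand-rolled tree.
       public class GroupingUniquenessSketch {

         static class Node {
           final int fieldCount;
           final Set<Set<Integer>> uniqueKeys;   // column sets known to be unique
           Node(int fieldCount, Set<Set<Integer>> uniqueKeys) {
             this.fieldCount = fieldCount;
             this.uniqueKeys = uniqueKeys;
           }
         }

         static class JoinNode extends Node {
           final Node left;
           final Node right;
           final Set<Integer> equiJoinFields;    // fields used in simple equi-join conditions
           JoinNode(Node left, Node right, Set<Integer> equiJoinFields) {
             super(left.fieldCount + right.fieldCount, new HashSet<>());
             this.left = left;
             this.right = right;
             this.equiJoinFields = equiJoinFields;
           }
         }

         // First consult the node's known unique keys, then walk down through joins
         // as deep as possible, splitting the grouping at the join's field boundary.
         static boolean isGroupingUnique(Node input, Set<Integer> groups) {
           if (groups.isEmpty()) {
             return false;
           }
           for (Set<Integer> u : input.uniqueKeys) {
             if (groups.containsAll(u)) {
               return true;                      // the grouping covers a unique key
             }
           }
           if (input instanceof JoinNode) {
             JoinNode join = (JoinNode) input;
             Set<Integer> newGroup = new HashSet<>(groups);
             newGroup.retainAll(join.equiJoinFields);
             Set<Integer> groupL = new HashSet<>();
             Set<Integer> groupR = new HashSet<>();
             for (int f : newGroup) {
               if (f < join.left.fieldCount) {
                 groupL.add(f);
               } else {
                 groupR.add(f - join.left.fieldCount);  // shift into right-side offsets
               }
             }
             // Simplification: treat uniqueness on either side as sufficient.
             return isGroupingUnique(join.left, groupL) || isGroupingUnique(join.right, groupR);
           }
           return false;
         }

         public static void main(String[] args) {
           Node orders = new Node(2, Set.of(Set.of(0)));  // field 0 is unique (a key)
           Node lineitem = new Node(2, new HashSet<>());
           // join condition equates field 0 (left key) with field 2 (right side)
           JoinNode join = new JoinNode(orders, lineitem, Set.of(0, 2));
           System.out.println(isGroupingUnique(join, Set.of(0)));  // prints: true
         }
       }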

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 483143)
    Time Spent: 3.5h  (was: 3h 20m)

> Push Aggregates thru joins in case it re-groups previously unique columns
> -------------------------------------------------------------------------
>
>                 Key: HIVE-24084
>                 URL: https://issues.apache.org/jira/browse/HIVE-24084
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Zoltan Haindrich
>            Assignee: Zoltan Haindrich
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)
