[ 
https://issues.apache.org/jira/browse/HIVE-22369?focusedWorklogId=346092&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346092
 ]

ASF GitHub Bot logged work on HIVE-22369:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Nov/19 17:02
            Start Date: 19/Nov/19 17:02
    Worklog Time Spent: 10m 
      Work Description: miklosgergely commented on pull request #845: 
HIVE-22369 Handle HiveTableFunctionScan at return path
URL: https://github.com/apache/hive/pull/845#discussion_r348049562
 
 

 ##########
 File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/HiveOpConverter.java
 ##########
 @@ -186,12 +193,67 @@ OpAttr dispatch(RelNode rn) throws SemanticException {
       return visit((HiveSortExchange) rn);
     } else if (rn instanceof HiveAggregate) {
       return visit((HiveAggregate) rn);
+    } else if (rn instanceof HiveTableFunctionScan) {
+      return visit((HiveTableFunctionScan) rn);
     }
     LOG.error(rn.getClass().getCanonicalName() + " operator translation not supported"
         + " yet in return path.");
     return null;
   }
 
+  private OpAttr visit(HiveTableFunctionScan scanRel) throws SemanticException {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Translating operator rel#" + scanRel.getId() + ":"
+          + scanRel.getRelTypeName() + " with row type: [" + scanRel.getRowType() + "]");
+    }
+
+    RexCall call = (RexCall) scanRel.getCall();
+
+    String functionName = call.getOperator().getName();
+    FunctionInfo fi = FunctionRegistry.getFunctionInfo(functionName);
+    GenericUDTF genericUDTF = fi.getGenericUDTF();
+
+    RowResolver rowResolver = new RowResolver();
+    List<String> fieldNames = new ArrayList<>(scanRel.getRowType().getFieldNames());
+    List<String> exprNames = new ArrayList<>(fieldNames);
+    List<ExprNodeDesc> exprCols = new ArrayList<>();
+    Map<String, ExprNodeDesc> colExprMap = new HashMap<>();
+    for (int pos = 0; pos < call.getOperands().size(); pos++) {
+      ExprNodeConverter converter = new ExprNodeConverter(SemanticAnalyzer.DUMMY_TABLE, fieldNames.get(pos),
+          scanRel.getRowType(), scanRel.getRowType(), ((HiveTableScan) scanRel.getInput(0)).getPartOrVirtualCols(),
+          scanRel.getCluster().getTypeFactory(), true);
+      ExprNodeDesc exprCol = call.getOperands().get(pos).accept(converter);
+      colExprMap.put(exprNames.get(pos), exprCol);
+      exprCols.add(exprCol);
+
+      ColumnInfo columnInfo = new ColumnInfo(fieldNames.get(pos), exprCol.getWritableObjectInspector(), null, false);
+      rowResolver.put(columnInfo.getTabAlias(), columnInfo.getAlias(), columnInfo);
+    }
+
+    QB qb = new QB(semanticAnalyzer.getQB().getId(), nextAlias(), true);
+    qb.getMetaData().setSrcForAlias(SemanticAnalyzer.DUMMY_TABLE, semanticAnalyzer.getDummyTable());
+    TableScanOperator op = (TableScanOperator) semanticAnalyzer.genTablePlan(SemanticAnalyzer.DUMMY_TABLE, qb);
+    op.getConf().setRowLimit(1);
+    qb.addAlias(SemanticAnalyzer.DUMMY_TABLE);
+    qb.setTabAlias(SemanticAnalyzer.DUMMY_TABLE, SemanticAnalyzer.DUMMY_TABLE);
+
+    Operator<?> output = OperatorFactory.getAndMakeChild(new SelectDesc(exprCols, fieldNames, false),
+        new RowSchema(rowResolver.getRowSchema()), op);
+    output.setColumnExprMap(colExprMap);
+    semanticAnalyzer.putOpInsertMap(output, rowResolver);
+
+    Operator<?> funcOp = semanticAnalyzer.genUDTFPlan(genericUDTF, null, fieldNames, qb, output, false);
 
 Review comment:
   Moving `genUDTFPlan` to `HiveOpConverter` would mean copying it from 
`SemanticAnalyzer`, leaving a duplicate until the old one gets removed. That's 
ok with me, but we should also consider that `HiveOpConverter` is already 1300 
lines long, and it wouldn't be wise to create another monster class. So I'm ok 
with moving it here, but then we should soon file a jira for a better design 
here too, cutting `HiveOpConverter` into pieces.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 346092)
    Time Spent: 40m  (was: 0.5h)

> Handle HiveTableFunctionScan at return path
> -------------------------------------------
>
>                 Key: HIVE-22369
>                 URL: https://issues.apache.org/jira/browse/HIVE-22369
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>            Reporter: Miklos Gergely
>            Assignee: Miklos Gergely
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-22369.01.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> The 
> [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573]
>  at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by 
> CalcitePlanner.internalGenSelectLogicalPlan, which may provide either a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831]
>  or a 
> [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776].
>  When HiveCalciteUtil.getTopLevelSelect is invoked on this plan, it looks for 
> a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633]
>  node in the tree, which it won't find when a HiveTableFunctionScan was 
> returned. This is why TestNewGetSplitsFormat is failing with return path.
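
The failure mode described above can be illustrated with a minimal, self-contained 
sketch. The types below are placeholders standing in for the Calcite/Hive classes, 
not the real API: a lookup that accepts only a HiveProject root returns nothing 
when the plan root is a HiveTableFunctionScan instead.

```java
// Placeholder types standing in for Calcite's RelNode hierarchy; this is a
// hedged sketch of the lookup problem, not Hive's actual implementation.
interface RelNode {}

class HiveProject implements RelNode {}

class HiveTableFunctionScan implements RelNode {}

class TopLevelSelectSketch {
    // Mirrors the shape of HiveCalciteUtil.getTopLevelSelect: it only
    // recognizes a HiveProject root. A HiveTableFunctionScan root falls
    // through to null, which is why the return path failed on such plans.
    static HiveProject getTopLevelSelect(RelNode root) {
        if (root instanceof HiveProject) {
            return (HiveProject) root;
        }
        return null; // a HiveTableFunctionScan root is missed here
    }

    public static void main(String[] args) {
        System.out.println(getTopLevelSelect(new HiveProject()) != null);
        System.out.println(getTopLevelSelect(new HiveTableFunctionScan()) != null);
    }
}
```

The patch sidesteps this by dispatching HiveTableFunctionScan to its own visit 
method instead of relying on a HiveProject being present at the top of the plan.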



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
