[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144574#comment-15144574 ]

ASF GitHub Bot commented on FLINK-3226:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1624#discussion_r52740335
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/CodeGenerator.scala ---
    @@ -231,29 +252,40 @@ class CodeGenerator(
         * be reused, they will be added to reusable code sections internally. The evaluation result
         * may be stored in the global result variable (see [[outRecordTerm]]).
         *
    -    * @param fieldExprs
    +    * @param fieldExprs field expressions to be converted
         * @param returnType conversion target type. Type must have the same arity as fieldExprs.
    +    * @param resultFieldNames result field names necessary for a mapping to POJO fields.
         * @return instance of GeneratedExpression
         */
       def generateResultExpression(
           fieldExprs: Seq[GeneratedExpression],
    -      returnType: TypeInformation[_ <: Any])
    +      returnType: TypeInformation[_ <: Any],
    +      resultFieldNames: Seq[String])
         : GeneratedExpression = {
    -    // TODO disable arity check for Rows and derive row arity from fieldExprs
         // initial type check
         if (returnType.getArity != fieldExprs.length) {
          throw new CodeGenException("Arity of result type does not match number of expressions.")
         }
         // type check
         returnType match {
    +      case pt: PojoTypeInfo[_] =>
    +        fieldExprs.zipWithIndex foreach {
    --- End diff ---
    
    Add a check that `fieldExprs` and `resultFieldNames` have the same length.
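
The precondition the reviewer asks for could be sketched roughly as follows. This is a standalone illustration, not the code that was merged: `GeneratedExpression` and `CodeGenException` are reduced to minimal placeholders for the real flink-table types, and `checkFieldArity` is a hypothetical helper name.

```scala
// Minimal stand-ins for the types referenced in the diff above.
case class GeneratedExpression(resultTerm: String)
class CodeGenException(msg: String) extends RuntimeException(msg)

// Sketch of the suggested check: fail fast if the number of field
// expressions does not match the number of result field names.
def checkFieldArity(
    fieldExprs: Seq[GeneratedExpression],
    resultFieldNames: Seq[String]): Unit = {
  if (fieldExprs.length != resultFieldNames.length) {
    throw new CodeGenException(
      s"Number of field expressions (${fieldExprs.length}) does not match " +
      s"number of result field names (${resultFieldNames.length}).")
  }
}
```

Performing the check before entering the `returnType match` keeps the per-field `zipWithIndex` loop free of out-of-bounds concerns.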


> Translate optimized logical Table API plans into physical plans representing 
> DataSet programs
> ---------------------------------------------------------------------------------------------
>
>                 Key: FLINK-3226
>                 URL: https://issues.apache.org/jira/browse/FLINK-3226
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see 
> FLINK-3225) query plan into a physical plan. The physical plan is a 1-to-1 
> representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream 
> operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user-code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins
> The translation should be the final part of Calcite's optimization process.
> For this task we need to:
> - implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one 
> Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all 
> relevant operator information (keys, user-code expression, strategy hints, 
> parallelism).
> - implement rules to translate optimized Calcite RelNodes into Flink 
> RelNodes. We start with a straightforward mapping and later add rules that 
> merge several relational operators into a single Flink operator, e.g., merge 
> a join followed by a filter. Timo implemented some rules for the first SQL 
> implementation which can be used as a starting point.
> - Integrate the translation rules into the Calcite optimization process
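
As a rough illustration of the first bullet (physical RelNodes that hold all information needed to emit one DataSet operator), consider the following toy sketch. All names here (`DataSetRel`, `DataSetScan`, `DataSetJoin`, `strategyHint`) are hypothetical placeholders for illustration, not the classes actually added to flink-table, and no Calcite interfaces are modeled:

```scala
// Toy physical plan node: each node corresponds to exactly one DataSet
// operator and carries keys, user-code expressions, and strategy hints.
sealed trait DataSetRel {
  def inputs: Seq[DataSetRel]
}

// Leaf node standing in for a scanned input table.
case class DataSetScan(table: String) extends DataSetRel {
  val inputs: Seq[DataSetRel] = Seq.empty
}

// Join node: keys are physical (execution-time) field indexes, the join
// condition is the user-code expression, and the strategy hint is optional.
case class DataSetJoin(
    left: DataSetRel,
    right: DataSetRel,
    leftKeys: Seq[Int],
    rightKeys: Seq[Int],
    condition: String,
    strategyHint: Option[String]) extends DataSetRel {
  val inputs: Seq[DataSetRel] = Seq(left, right)
}
```

A translation rule would then map an optimized Calcite join node onto one `DataSetJoin`, resolving logical field references to the physical indexes stored in `leftKeys`/`rightKeys`.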



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
