[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144564#comment-15144564 ]
ASF GitHub Bot commented on FLINK-3226:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1624#discussion_r52739425

    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/CodeGenerator.scala ---
    @@ -42,13 +42,30 @@ import scala.collection.mutable
       * @param config configuration that determines runtime behavior
       * @param input1 type information about the first input of the Function
       * @param input2 type information about the second input if the Function is binary
    +  * @param inputPojoFieldMapping additional mapping information if input1 is a POJO (POJO types
    +  *                              have no deterministic field order). We assume that input2 is
    +  *                              converted before and thus is never a POJO.
       */
     class CodeGenerator(
    -    config: TableConfig,
    -    input1: TypeInformation[Any],
    -    input2: Option[TypeInformation[Any]] = None)
    +    config: TableConfig,
    +    input1: TypeInformation[Any],
    +    input2: Option[TypeInformation[Any]] = None,
    +    inputPojoFieldMapping: Array[Int] = Array())
       extends RexVisitor[GeneratedExpression] {

    +  /**
    +    * A code generator for generating unary Flink
    +    * [[org.apache.flink.api.common.functions.Function]]s with one input.
    +    *
    +    * @param config configuration that determines runtime behavior
    +    * @param input type information about the input of the Function
    +    * @param inputPojoFieldMapping additional mapping information necessary if input is a
    +    *                              POJO (POJO types have no deterministic field order).
    +    */
    +  def this(config: TableConfig, input: TypeInformation[Any], inputPojoFieldMapping: Array[Int]) =
    --- End diff --

    Make fieldMapping optional?


> Translate optimized logical Table API plans into physical plans representing DataSet programs
> ----------------------------------------------------------------------------------------------
>
>                 Key: FLINK-3226
>                 URL: https://issues.apache.org/jira/browse/FLINK-3226
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see FLINK-3225) query plan into a physical plan. The physical plan is a 1-to-1 representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user-code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins
>
> The translation should be the final part of Calcite's optimization process. For this task we need to:
> - implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all relevant operator information (keys, user-code expression, strategy hints, parallelism).
> - implement rules to translate optimized Calcite RelNodes into Flink RelNodes. We start with a straight-forward mapping and later add rules that merge several relational operators into a single Flink operator, e.g., merge a join followed by a filter. Timo implemented some rules for the first SQL implementation which can be used as a starting point.
> - Integrate the translation rules into the Calcite optimization process



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
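
On the review question above ("Make fieldMapping optional?"), a minimal, self-contained sketch of one way to do it. The types here (TableConfigStub, TypeInfoStub, CodeGenSketch) are stand-ins, not the actual Flink classes. Since the primary constructor in the diff already defaults inputPojoFieldMapping to Array(), and Scala 2 does not allow a second overloaded constructor to also declare default arguments, the mapping can be made optional for the unary case by adding one more constructor overload:

    // Sketch only: stand-in types, not the actual Flink TableConfig / TypeInformation.
    case class TableConfigStub()
    case class TypeInfoStub(name: String)

    class CodeGenSketch(
        config: TableConfigStub,
        input1: TypeInfoStub,
        input2: Option[TypeInfoStub] = None,
        inputPojoFieldMapping: Array[Int] = Array()) {  // default lives on the primary constructor

      // Scala 2 rejects default arguments on a second constructor overload, so the
      // unary case is made "optional" via an extra overload instead of a default value.
      def this(config: TableConfigStub, input: TypeInfoStub, mapping: Array[Int]) =
        this(config, input, None, mapping)

      def this(config: TableConfigStub, input: TypeInfoStub) =
        this(config, input, None, Array())

      override def toString =
        s"CodeGenSketch(${input1.name}, $input2, mapping=${inputPojoFieldMapping.mkString("[", ",", "]")})"
    }

    object CodeGenSketchDemo extends App {
      val cfg = TableConfigStub()
      println(new CodeGenSketch(cfg, TypeInfoStub("Row")))                  // no mapping needed
      println(new CodeGenSketch(cfg, TypeInfoStub("Pojo"), Array(2, 0, 1))) // POJO field mapping
    }

An alternative would be to wrap the mapping in an Option[Array[Int]]; the overload keeps call sites closest to the constructors already shown in the diff.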
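
On the translation step described in the quoted issue (one Flink RelNode per DataSet operator, plus rules that map optimized logical nodes onto them), a purely illustrative, self-contained sketch of that shape. All names here (LogicalScan, DataSetJoin, Translator, ...) are invented for the example; the real implementation would use Calcite RelNodes and optimizer rules rather than a plain recursive match.

    // Illustrative stand-ins only; not the Calcite or Flink APIs.
    sealed trait LogicalRel                        // optimized logical plan (Calcite side)
    case class LogicalScan(table: String) extends LogicalRel
    case class LogicalFilter(input: LogicalRel, predicate: String) extends LogicalRel
    case class LogicalJoin(left: LogicalRel, right: LogicalRel,
                           leftKeys: Seq[Int], rightKeys: Seq[Int]) extends LogicalRel

    // Physical plan: each node corresponds to exactly one DataSet operator and holds
    // the operator information the code generator needs (keys, user-code expressions).
    sealed trait FlinkRel
    case class DataSetSource(table: String) extends FlinkRel
    case class DataSetFilter(input: FlinkRel, predicate: String) extends FlinkRel
    case class DataSetJoin(left: FlinkRel, right: FlinkRel,
                           leftKeys: Seq[Int], rightKeys: Seq[Int]) extends FlinkRel

    // Plays the role of the translation rules; in Calcite these would be optimizer
    // rules applied as the final phase of the optimization process, and later rules
    // could merge adjacent operators (e.g. a join followed by a filter).
    object Translator {
      def translate(rel: LogicalRel): FlinkRel = rel match {
        case LogicalScan(t)            => DataSetSource(t)
        case LogicalFilter(in, p)      => DataSetFilter(translate(in), p)
        case LogicalJoin(l, r, lk, rk) => DataSetJoin(translate(l), translate(r), lk, rk)
      }
    }

    object TranslatorDemo extends App {
      val logical = LogicalJoin(
        LogicalFilter(LogicalScan("Orders"), "amount > 10"),
        LogicalScan("Customers"),
        leftKeys = Seq(0), rightKeys = Seq(0))
      println(Translator.translate(logical))
    }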