[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143300#comment-15143300 ]
ASF GitHub Bot commented on FLINK-3226:
---------------------------------------

GitHub user twalthr opened a pull request:

    https://github.com/apache/flink/pull/1624

    [FLINK-3226] Translation from and to POJOs for CodeGenerator

    This PR implements full POJO support as an input and output type of the Table API. It is now possible to convert from one arbitrary type to another arbitrary type. I fixed the failing AsITCase and implemented additional tests from/to tuples, from/to POJOs, and from/to case classes. (A sketch of such a conversion at the Table API level follows at the end of this message.)

    @vasia and @fhueske, feel free to review.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/twalthr/flink PojoSupport

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/1624.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1624

----

commit 2322d9f602c4b7e01a582958f918372efc214848
Author: twalthr <twal...@apache.org>
Date:   2016-02-11T15:16:29Z

    [FLINK-3226] Translation from and to POJOs for CodeGenerator

----

> Translate optimized logical Table API plans into physical plans representing DataSet programs
> ---------------------------------------------------------------------------------------------
>
>                 Key: FLINK-3226
>                 URL: https://issues.apache.org/jira/browse/FLINK-3226
>             Project: Flink
>          Issue Type: Sub-task
>      Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API query plan (see FLINK-3225) into a physical plan. The physical plan is a 1-to-1 representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins.
>
> The translation should be the final part of Calcite's optimization process. For this task we need to:
> - Implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all relevant operator information (keys, user-code expressions, strategy hints, parallelism).
> - Implement rules to translate optimized Calcite RelNodes into Flink RelNodes. We start with a straightforward mapping and later add rules that merge several relational operators into a single Flink operator, e.g., merge a join followed by a filter. Timo implemented some rules for the first SQL implementation which can be used as a starting point. (A sketch of such a rule follows below.)
> - Integrate the translation rules into the Calcite optimization process.
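To make the quoted plan-translation description more concrete, here is a minimal sketch of what one such translation rule and its target RelNode could look like, built on Calcite's ConverterRule. The names DataSetConvention, DataSetFilter, and DataSetFilterRule are illustrative assumptions standing in for the actual Flink classes, and everything related to code generation is left out.

import java.util.{List => JList}

import org.apache.calcite.plan.{Convention, RelOptCluster, RelOptRule, RelTraitSet}
import org.apache.calcite.rel.{RelNode, SingleRel}
import org.apache.calcite.rel.convert.ConverterRule
import org.apache.calcite.rel.logical.LogicalFilter
import org.apache.calcite.rex.RexNode

// Hypothetical calling convention marking RelNodes that map 1-to-1 onto
// DataSet operators (name is illustrative, not taken from the patch).
object DataSetConvention {
  val INSTANCE: Convention = new Convention.Impl("DATASET", classOf[RelNode])
}

// Hypothetical Flink RelNode: exactly one RelNode per DataSet operator.
// It holds the information needed later for translation and code
// generation: the input, the filter condition, and the trait set.
class DataSetFilter(
    cluster: RelOptCluster,
    traits: RelTraitSet,
    input: RelNode,
    val condition: RexNode)
  extends SingleRel(cluster, traits, input) {

  override def copy(traitSet: RelTraitSet, inputs: JList[RelNode]): RelNode =
    new DataSetFilter(getCluster, traitSet, inputs.get(0), condition)
}

// Translation rule: convert an optimized logical Calcite filter into the
// Flink DataSet RelNode above. Rules like this would be registered with
// Calcite's planner as the final step of the optimization process.
class DataSetFilterRule extends ConverterRule(
    classOf[LogicalFilter],       // match logical filters
    Convention.NONE,              // that carry no physical convention yet
    DataSetConvention.INSTANCE,   // and move them into the DataSet convention
    "DataSetFilterRule") {

  override def convert(rel: RelNode): RelNode = {
    val filter = rel.asInstanceOf[LogicalFilter]

    // Also request the input in the DataSet convention so that the whole
    // subtree becomes a 1-to-1 representation of a DataSet program.
    val convertedInput =
      RelOptRule.convert(filter.getInput, DataSetConvention.INSTANCE)

    new DataSetFilter(
      rel.getCluster,
      rel.getTraitSet.replace(DataSetConvention.INSTANCE),
      convertedInput,
      filter.getCondition)
  }
}

A join or grouping rule would follow the same pattern, additionally carrying the key fields and any strategy hints on the Flink RelNode.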
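And, returning to the pull request itself, a minimal sketch of the kind of conversion it enables at the Table API boundary: a DataSet of POJOs is turned into a Table and the result is converted back into a DataSet of a different type (here a case class). The Person and Adult types, the field names, and the exact conversion methods (toTable, toDataSet) are assumptions for illustration, not taken from the patch.

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._

// Hypothetical POJO input type (illustrative, not taken from the patch).
// A Flink POJO needs a public no-argument constructor and accessible fields.
class Person(var name: String, var age: Int) {
  def this() = this(null, 0)
}

// Hypothetical case class used as the output type of the query.
case class Adult(name: String, age: Int)

object PojoTableExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val persons: DataSet[Person] =
      env.fromElements(new Person("Alice", 35), new Person("Bob", 12))

    // Convert the POJO DataSet into a Table, evaluate expressions on the
    // named fields, and convert the result back into a DataSet of another
    // type. `toTable` and `toDataSet` are assumed conversion points of the
    // Scala Table API at the time of this pull request.
    val adults: DataSet[Adult] = persons
      .toTable
      .filter('age > 18)
      .select('name, 'age)
      .toDataSet[Adult]

    adults.print()
  }
}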