[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137685#comment-15137685 ]
ASF GitHub Bot commented on FLINK-3226:
---------------------------------------

Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1600#discussion_r52225277

    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/functions/AggregateFunction.scala ---
    @@ -0,0 +1,76 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements. See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership. The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License. You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.flink.api.table.plan.functions
    +
    +import java.lang.Iterable
    +import com.google.common.base.Preconditions
    +import org.apache.flink.api.common.functions.RichGroupReduceFunction
    +import org.apache.flink.api.table.plan.functions.aggregate.Aggregate
    +import org.apache.flink.configuration.Configuration
    +import org.apache.flink.util.Collector
    +import scala.collection.JavaConversions._
    +import org.apache.flink.api.table.Row
    +
    +/**
    + * A wrapper Flink GroupReduceOperator UDF for aggregates. It takes the grouped data as input,
    + * feeds it to the aggregates, and collects the records with the aggregated values.
    + *
    + * @param aggregates SQL aggregate functions.
    + * @param fields The grouped keys' indices in the input.
    + * @param groupingKeys The grouping keys' positions.
    + */
    +class AggregateFunction(
    +    private val aggregates: Array[Aggregate[_ <: Any]],
    +    private val fields: Array[Int],
    +    private val groupingKeys: Array[Int]) extends RichGroupReduceFunction[Row, Row] {
    --- End diff --

    I would move everything that is needed during runtime there: `AggregateFunction` in `org.apache.flink.table.runtime` and helper classes in sub-packages.
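To make the wrapper idea concrete, here is a minimal, self-contained sketch of how such a GroupReduceFunction can drive a set of aggregates and emit one result row per group. The `SimpleAggregate`/`SumAggregate` helpers, the `AggregateFunctionSketch` name, and its constructor layout are assumptions made for illustration only, not the `Aggregate` interface or the exact class from the pull request; the sketch also assumes `Row` exposes `setField`/`productElement`, as the Table API `Row` of that time did.

    import java.lang.Iterable

    import org.apache.flink.api.common.functions.RichGroupReduceFunction
    import org.apache.flink.api.table.Row
    import org.apache.flink.util.Collector

    import scala.collection.JavaConversions._

    /** Hypothetical per-field aggregate helper; the Aggregate trait in the PR may differ. */
    trait SimpleAggregate extends Serializable {
      def reset(): Unit
      def add(value: Any): Unit
      def result: Any
    }

    /** Example helper that sums a numeric field. */
    class SumAggregate extends SimpleAggregate {
      private var sum: Double = 0.0
      override def reset(): Unit = { sum = 0.0 }
      override def add(value: Any): Unit = { sum += value.asInstanceOf[Number].doubleValue() }
      override def result: Any = sum
    }

    /**
     * Sketch of the wrapper: reset the aggregates for each group, feed every row of the group
     * into them, then emit one row holding the grouping keys plus the aggregated values.
     */
    class AggregateFunctionSketch(
        aggregates: Array[SimpleAggregate],
        aggregateFields: Array[Int],
        groupingKeys: Array[Int]) extends RichGroupReduceFunction[Row, Row] {

      override def reduce(records: Iterable[Row], out: Collector[Row]): Unit = {
        aggregates.foreach(_.reset())
        // GroupReduce is only invoked for non-empty groups, so lastRecord is set below.
        var lastRecord: Row = null
        for (record <- records) {
          var i = 0
          while (i < aggregates.length) {
            aggregates(i).add(record.productElement(aggregateFields(i)))
            i += 1
          }
          lastRecord = record
        }
        // One output row per group: grouping key values first, aggregate results after them.
        val output = new Row(groupingKeys.length + aggregates.length)
        var pos = 0
        groupingKeys.foreach { key =>
          output.setField(pos, lastRecord.productElement(key))
          pos += 1
        }
        aggregates.foreach { agg =>
          output.setField(pos, agg.result)
          pos += 1
        }
        out.collect(output)
      }
    }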
> Translate optimized logical Table API plans into physical plans representing DataSet programs
> ----------------------------------------------------------------------------------------------
>
>                 Key: FLINK-3226
>                 URL: https://issues.apache.org/jira/browse/FLINK-3226
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see FLINK-3225) query plan into a physical plan.
> The physical plan is a 1-to-1 representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user-code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins.
> The translation should be the final part of Calcite's optimization process. For this task we need to:
> - Implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all relevant operator information (keys, user-code expression, strategy hints, parallelism).
> - Implement rules to translate optimized Calcite RelNodes into Flink RelNodes. We start with a straightforward mapping and later add rules that merge several relational operators into a single Flink operator, e.g., merge a join followed by a filter. Timo implemented some rules for the first SQL implementation which can be used as a starting point.
> - Integrate the translation rules into the Calcite optimization process.


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
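As a companion to the last two bullet items of the quoted description, the following sketch shows the rough shape of a DataSet RelNode and of a Calcite ConverterRule that produces it. Every `...Sketch` name, the `DataSetConventionSketch` convention, and the `translateToPlan` signature are illustrative assumptions, not the classes Flink ended up with; the sketch ignores type-information handling (a real translation has to supply row type info to the DataSet operators) and reuses the `AggregateFunctionSketch` and `SumAggregate` helpers from the earlier sketch with placeholder arguments.

    import java.util

    import org.apache.calcite.plan.{Convention, RelOptCluster, RelOptRule, RelTraitSet}
    import org.apache.calcite.rel.convert.ConverterRule
    import org.apache.calcite.rel.logical.LogicalAggregate
    import org.apache.calcite.rel.{RelNode, SingleRel}
    import org.apache.flink.api.java.{DataSet, ExecutionEnvironment}
    import org.apache.flink.api.table.Row

    /** Physical nodes mix in this trait so the optimized plan can be turned into a DataSet program. */
    trait DataSetRelSketch extends RelNode {
      def translateToPlan(env: ExecutionEnvironment): DataSet[Row]
    }

    /** Hypothetical calling convention marking nodes that translate to DataSet operators. */
    object DataSetConventionSketch {
      val INSTANCE: Convention = new Convention.Impl("DATASET_SKETCH", classOf[DataSetRelSketch])
    }

    /** Physical counterpart of a logical aggregate; it carries the keys the DataSet operator needs. */
    class DataSetAggregateSketch(
        cluster: RelOptCluster,
        traitSet: RelTraitSet,
        input: RelNode,
        val groupingKeys: Array[Int])
      extends SingleRel(cluster, traitSet, input) with DataSetRelSketch {

      override def copy(traitSet: RelTraitSet, inputs: util.List[RelNode]): RelNode =
        new DataSetAggregateSketch(getCluster, traitSet, inputs.get(0), groupingKeys)

      override def translateToPlan(env: ExecutionEnvironment): DataSet[Row] = {
        // Translate the input node first, then group it; the reduceGroup call is where a runtime
        // function such as the aggregate wrapper discussed in the review comment plugs in.
        val inputDataSet = getInput.asInstanceOf[DataSetRelSketch].translateToPlan(env)
        inputDataSet
          .groupBy(groupingKeys: _*)
          // Placeholder aggregate spec; a real node derives it from the plan's aggregate calls.
          .reduceGroup(new AggregateFunctionSketch(
            Array[SimpleAggregate](new SumAggregate), Array(1), groupingKeys))
      }
    }

    /** Converter rule that swaps a logical aggregate for the DataSet version during optimization. */
    class DataSetAggregateRuleSketch extends ConverterRule(
        classOf[LogicalAggregate],
        Convention.NONE,
        DataSetConventionSketch.INSTANCE,
        "DataSetAggregateRuleSketch") {

      override def convert(rel: RelNode): RelNode = {
        val agg = rel.asInstanceOf[LogicalAggregate]
        // Ask the planner to convert the input to the DataSet convention as well.
        val convertedInput = RelOptRule.convert(agg.getInput, DataSetConventionSketch.INSTANCE)
        new DataSetAggregateSketch(
          agg.getCluster,
          agg.getTraitSet.replace(DataSetConventionSketch.INSTANCE),
          convertedInput,
          agg.getGroupSet.toArray)
      }
    }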