[ https://issues.apache.org/jira/browse/FLINK-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547700#comment-16547700 ]
ASF GitHub Bot commented on FLINK-9852:
---------------------------------------

Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/6343#discussion_r203303655

--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalTableUtil.scala ---
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog
+
+import java.util
+
+import org.apache.flink.table.api._
+import org.apache.flink.table.descriptors.DescriptorProperties
+import org.apache.flink.table.factories._
+import org.apache.flink.table.plan.schema._
+import org.apache.flink.table.plan.stats.FlinkStatistic
+import org.apache.flink.table.util.Logging
+
+
+/**
+  * The utility class is used to convert [[ExternalCatalogTable]] to [[TableSourceSinkTable]].
+  *
+  * It uses [[TableFactoryService]] for discovering.
+  */
+object ExternalTableUtil extends Logging {
+
+  /**
+    * Converts an [[ExternalCatalogTable]] instance to a [[TableSourceTable]] instance
+    *
+    * @param externalCatalogTable the [[ExternalCatalogTable]] instance which to convert
+    * @return converted [[TableSourceTable]] instance from the input catalog table
+    */
+  def fromExternalCatalogTable[T1, T2](
+      tableEnv: TableEnvironment,
+      externalCatalogTable: ExternalCatalogTable)
+    : TableSourceSinkTable[T1, T2] = {
+
+    val properties = new DescriptorProperties()
+    externalCatalogTable.addProperties(properties)
+    val javaMap = properties.asMap
+    val statistics = new FlinkStatistic(externalCatalogTable.getTableStats)
+
+    val source: Option[TableSourceTable[T1]] = tableEnv match {
+      // check for a batch table source in this batch environment
+      case _: BatchTableEnvironment if externalCatalogTable.isBatchTable =>
+        createBatchTableSource(externalCatalogTable, javaMap, statistics)
+
+      // check for a stream table source in this stream environment
+      case _: StreamTableEnvironment if externalCatalogTable.isStreamTable =>
+        createStreamTableSource(externalCatalogTable, javaMap, statistics)
+
+      case _ =>
+        throw new ValidationException(
+          "External catalog table does not support the current environment for a table source.")
+    }
+
+    val sink: Option[TableSinkTable[T2]] = tableEnv match {
+      // check for a batch table sink in this batch environment
+      case _: BatchTableEnvironment if externalCatalogTable.isBatchTable =>
+        createBatchTableSink(externalCatalogTable, javaMap, statistics)
+
+      // check for a stream table sink in this stream environment
+      case _: StreamTableEnvironment if externalCatalogTable.isStreamTable =>
+        createStreamTableSink(externalCatalogTable, javaMap, statistics)
+
+      case _ =>
+        throw new ValidationException(
+          "External catalog table does not support the current environment for a table sink.")
+    }
+
+    new TableSourceSinkTable[T1, T2](source, sink)
+  }
+
+  private def createBatchTableSource[T](
+      externalCatalogTable: ExternalCatalogTable,
+      javaMap: util.Map[String, String],
+      statistics: FlinkStatistic)
+    : Option[TableSourceTable[T]] = if (externalCatalogTable.isTableSource) {
--- End diff --

Is that a good enough reason to force us both to write code that we do not like? Apparently we are not the only ones who don't like it: https://stackoverflow.com/a/33425307/8149051

After reading your link I get why you shouldn't use `return` in lambda functions, but it doesn't make a point against using it in methods. At least I do not see one. However, if you are not sure, at least reverse the if/else branches. The simpler branch should always go first.

> Expose descriptor-based sink creation in table environments
> -----------------------------------------------------------
>
>                 Key: FLINK-9852
>                 URL: https://issues.apache.org/jira/browse/FLINK-9852
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table API & SQL
>    Affects Versions: 1.6.0
>            Reporter: Timo Walther
>            Assignee: Timo Walther
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, only a table source can be created using the unified table
> descriptors with {{tableEnv.from(...)}}. A similar approach should be
> supported for defining sinks or even both types at the same time.
> I suggest the following syntax:
> {code}
> tableEnv.connect(Kafka(...)).registerSource("name")
> tableEnv.connect(Kafka(...)).registerSink("name")
> tableEnv.connect(Kafka(...)).registerSourceAndSink("name")
> {code}
> A table could then access the registered source/sink.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
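For context on the `return`-in-lambda point debated above: in Scala, a `return` inside a closure is compiled as a non-local return, implemented by throwing `scala.runtime.NonLocalReturnControl` and unwinding to the enclosing method. A minimal sketch of the usual objection follows; the object and method names (`ReturnInLambda`, `findPositive`) are illustrative only and not part of the PR:

```scala
object ReturnInLambda {

  // Hypothetical helper: scan a sequence and return its first positive
  // element, or -1 if there is none.
  def findPositive(xs: Seq[Int]): Int = {
    try {
      // `return x` here does NOT just exit the lambda passed to foreach:
      // the compiler implements it by throwing NonLocalReturnControl,
      // which unwinds all the way to the enclosing method `findPositive`.
      xs.foreach { x =>
        if (x > 0) return x
      }
      -1
    } catch {
      // An overly broad catch intercepts the control-flow exception,
      // so the intended `return x` never reaches the caller.
      case _: Throwable => -2
    }
  }

  def main(args: Array[String]): Unit = {
    println(findPositive(Seq(0, 5))) // prints -2, not 5: the catch swallowed the return
  }
}
```

This also illustrates why the objection is specific to lambdas: a plain `return` in a method body needs no exception machinery, which is the distinction the comment above is drawing.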