JingsongLi commented on a change in pull request #8678: [FLINK-12708][table] Introduce new source and sink interfaces to make Blink runner work
URL: https://github.com/apache/flink/pull/8678#discussion_r292300596
 
 

 ##########
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/physical/batch/BatchExecSink.scala
 ##########
 @@ -77,11 +78,12 @@ class BatchExecSink[T](
   override protected def translateToPlanInternal(
       tableEnv: BatchTableEnvironment): StreamTransformation[Any] = {
     val resultTransformation = sink match {
-      case batchTableSink: BatchTableSink[T] =>
+      case boundedTableSink: StreamTableSink[T] =>
+        // we can insert the bounded DataStream into a StreamTableSink
         val transformation = translateToStreamTransformation(withChangeFlag = false, tableEnv)
         val boundedStream = new DataStream(tableEnv.streamEnv, transformation)
-        batchTableSink.emitBoundedStream(
-          boundedStream, tableEnv.getConfig, tableEnv.streamEnv.getConfig).getTransformation
+        boundedTableSink.emitDataStream(boundedStream)
 
 Review comment:
   According to the current design:
   withChangeFlag should be false when the sink is a BoundedTableSink,
   and withChangeFlag should be true when the sink is a DataStream sink.
   I think it's better to separate these two cases, and to keep
   BoundedTableSink separate from DataStream.
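
   A minimal sketch of what that separation could look like (DataStreamTableSink is
   an assumed name; sink, tableEnv and translateToStreamTransformation come from the
   surrounding BatchExecSink; the sketch only covers choosing withChangeFlag per sink
   type, not the rest of translateToPlanInternal):

      // Sketch only. Assumed imports, following the surrounding file:
      //   org.apache.flink.streaming.api.datastream.DataStream
      //   org.apache.flink.table.api.TableException
      //   org.apache.flink.table.sinks.StreamTableSink
      //   plus a planner-side DataStreamTableSink (assumed name)
      sink match {
        case dataStreamTableSink: DataStreamTableSink[T] =>
          // match the more specific DataStream sink first, in case it also
          // implements StreamTableSink; keep the change flag so that
          // updates/retractions are preserved
          val transformation = translateToStreamTransformation(withChangeFlag = true, tableEnv)
          dataStreamTableSink.emitDataStream(new DataStream(tableEnv.streamEnv, transformation))

        case boundedTableSink: StreamTableSink[T] =>
          // bounded sink: the data is append-only, so no change flag is needed
          val transformation = translateToStreamTransformation(withChangeFlag = false, tableEnv)
          boundedTableSink.emitDataStream(new DataStream(tableEnv.streamEnv, transformation))

        case _ =>
          throw new TableException(s"Unsupported sink: ${sink.getClass.getSimpleName}")
      }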
