hequn8128 commented on a change in pull request #6787: [FLINK-8577][table] 
Implement proctime DataStream to Table upsert conversion
URL: https://github.com/apache/flink/pull/6787#discussion_r237328431
 
 

 ##########
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/calcite/RelTimeIndicatorConverter.scala
 ##########
 @@ -165,7 +165,30 @@ class RelTimeIndicatorConverter(rexBuilder: RexBuilder) 
extends RelShuttle {
   override def visit(exchange: LogicalExchange): RelNode =
     throw new TableException("Logical exchange in a stream environment is not 
supported yet.")
 
-  override def visit(scan: TableScan): RelNode = scan
+  override def visit(scan: TableScan): RelNode = {
+    val upsertStreamTable = scan.getTable.unwrap(classOf[UpsertStreamTable[_]])
+    if (upsertStreamTable != null) {
+      val relTypes = scan.getRowType.getFieldList.map(_.getType)
+      val timeIndicatorIndexes = relTypes.zipWithIndex
+        .filter(e => FlinkTypeFactory.isTimeIndicatorType(e._1))
+        .map(_._2)
+      val input = if (timeIndicatorIndexes.nonEmpty) {
+        // materialize time indicator
+        val rewrittenScan = scan.copy(scan.getTraitSet, scan.getInputs)
+        materializerUtils.projectAndMaterializeFields(rewrittenScan, timeIndicatorIndexes.toSet)
+      } else {
+        scan
+      }
+
+      LogicalLastRow.create(
 
 Review comment:
   You are right.
   Currently, there is no such no-op node in the execution plan. If you are still concerned about the no-op node in the optimization plan, I can add a rule to the RetractionRules to remove it. That way, we can rename `LastRow` to `UpsertToRetractionConverter`. What do you think?
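
For readers following the diff above: the pattern it uses to locate time-indicator fields can be sketched in plain Scala. This is only an illustrative sketch, with a hypothetical `Field` model and a stand-in boolean flag in place of `FlinkTypeFactory.isTimeIndicatorType`; it is not Flink API.

```scala
// Sketch of the diff's index-collection pattern, using a hypothetical
// Field(name, isTimeIndicator) model instead of Calcite RelDataType fields.
object TimeIndicatorIndexSketch {
  final case class Field(name: String, isTimeIndicator: Boolean)

  // Collect the positions of time-indicator fields, mirroring
  // relTypes.zipWithIndex.filter(...).map(_._2) in the diff.
  def timeIndicatorIndexes(fields: Seq[Field]): Set[Int] =
    fields.zipWithIndex
      .filter { case (field, _) => field.isTimeIndicator }
      .map(_._2)
      .toSet

  def main(args: Array[String]): Unit = {
    val fields = Seq(
      Field("id", isTimeIndicator = false),
      Field("proctime", isTimeIndicator = true),
      Field("name", isTimeIndicator = false))
    // Only index 1 ("proctime") is a time indicator here.
    println(timeIndicatorIndexes(fields))
  }
}
```

When the resulting set is non-empty, the diff materializes those fields via `projectAndMaterializeFields` before wrapping the scan in `LogicalLastRow`.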

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
