YuvalItzchakov commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-814253822
@fsk119 After registering the tables in `PushFilterInCalcIntoTableSourceRuleTest`, only one test still fails, from `PushFilterIntoLegacyTableSourceScanRuleTest`:

```
org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(a=[$0], b=[$1])
+- LogicalFilter(condition=[OR(>=(+($0, *(3600000:INTERVAL HOUR, 5)), $1), >=(+($1, *(12:INTERVAL YEAR, 2)), $0))])
   +- LogicalTableScan(table=[[default_catalog, default_database, MTable, source: [filterPushedDown=[false], filter=[]]]])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
	at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:72)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
	at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
	at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
	at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
	at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
	at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:281)
	at org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:889)
	at org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:780)
	at org.apache.flink.table.planner.utils.TableTestUtilBase.verifyRelPlan(TableTestBase.scala:400)
	at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoLegacyTableSourceScanRuleTest.testWithInterval(PushFilterIntoLegacyTableSourceScanRuleTest.scala:189)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=LOGICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
Missing conversion is LogicalTableScan[convention: NONE -> LOGICAL]
There is 1 empty subset: rel#176:RelSubset#0.LOGICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows
164:LogicalTableScan(table=[[default_catalog, default_database, MTable, source: [filterPushedDown=[false], filter=[]]]])

Root: rel#178:RelSubset#1.LOGICAL.any.None: 0.[NONE].[NONE]
Original rel:
LogicalProject(subset=[rel#172:RelSubset#2.LOGICAL.any.None: 0.[NONE].[NONE]], a=[$0], b=[$1]): rowcount = 7.5E7, cumulative cost = {7.5E7 rows, 1.5E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 170
  LogicalFilter(subset=[rel#169:RelSubset#1.NONE.any.None: 0.[NONE].[NONE]], condition=[OR(>=(+($0, *(3600000:INTERVAL HOUR, 5)), $1), >=(+($1, *(12:INTERVAL YEAR, 2)), $0))]): rowcount = 7.5E7, cumulative cost = {7.5E7 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 168
    LogicalTableScan(subset=[rel#167:RelSubset#0.NONE.any.None: 0.[NONE].[NONE]], table=[[default_catalog, default_database, MTable, source: [filterPushedDown=[false], filter=[]]]]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.00000001E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 164

Sets:
Set#0, type: RecordType(TIMESTAMP(6) a, TIMESTAMP(6) b)
	rel#167:RelSubset#0.NONE.any.None: 0.[NONE].[NONE], best=null
		rel#164:LogicalTableScan.NONE.any.None: 0.[NONE].[NONE](table=[default_catalog, default_database, MTable, source: [filterPushedDown=[false], filter=[]]]), rowcount=1.0E8, cumulative cost={inf}
	rel#176:RelSubset#0.LOGICAL.any.None: 0.[NONE].[NONE], best=null
Set#1, type: RecordType(TIMESTAMP(6) a, TIMESTAMP(6) b)
	rel#169:RelSubset#1.NONE.any.None: 0.[NONE].[NONE], best=null
		rel#168:LogicalFilter.NONE.any.None: 0.[NONE].[NONE](input=RelSubset#167,condition=OR(>=(+($0, *(3600000:INTERVAL HOUR, 5)), $1), >=(+($1, *(12:INTERVAL YEAR, 2)), $0))), rowcount=7.5E7, cumulative cost={inf}
		rel#174:LogicalCalc.NONE.any.None: 0.[NONE].[NONE](input=RelSubset#167,expr#0..1={inputs},expr#2=3600000:INTERVAL HOUR,expr#3=5,expr#4=*($t2, $t3),expr#5=+($t0, $t4),expr#6=>=($t5, $t1),expr#7=12:INTERVAL YEAR,expr#8=2,expr#9=*($t7, $t8),expr#10=+($t1, $t9),expr#11=>=($t10, $t0),expr#12=OR($t6, $t11),proj#0..1={exprs},$condition=$t12), rowcount=7.5E7, cumulative cost={inf}
	rel#178:RelSubset#1.LOGICAL.any.None: 0.[NONE].[NONE], best=null
		rel#177:FlinkLogicalCalc.LOGICAL.any.None: 0.[NONE].[NONE](input=RelSubset#176,select=a, b,where=OR(>=(+(a, *(3600000:INTERVAL HOUR, 5)), b), >=(+(b, *(12:INTERVAL YEAR, 2)), a))), rowcount=7.5E7, cumulative cost={inf}
		rel#173:AbstractConverter.LOGICAL.any.None: 0.[NONE].[NONE](input=RelSubset#169,convention=LOGICAL,FlinkRelDistributionTraitDef=any,MiniBatchIntervalTraitDef=None: 0,ModifyKindSetTraitDef=[NONE],UpdateKindTraitDef=[NONE]), rowcount=7.5E7, cumulative cost={inf}
```

The test does pass when I run it directly from the class itself, without inheritance. I assume this is because `util` is a `StreamTestUtil` rather than a `BatchTestUtil`?
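To make the hypothesis concrete, here is a minimal Scala sketch of what I mean. It is illustrative only, not the actual PR code: the base-class name is hypothetical, but `streamTestUtil()`/`batchTestUtil()` are the helpers `TableTestBase` provides, and whichever one initializes the inherited `util` decides which planner (and therefore which rule set) optimizes the test query:

```scala
// Hypothetical shared base for the two rule tests (name is illustrative).
class PushFilterRuleTestBase extends TableTestBase {
  // The base class fixes the environment; subclasses inherit this field.
  // With a stream util, the stream planner's rule set runs, so a test
  // that needs the batch-only LogicalTableScan conversion cannot plan.
  protected val util = streamTestUtil()
}

// A subclass needing the batch rule set would have to supply its own util
// instead of reusing the inherited one, e.g. a separate field:
//   private val batchUtil = batchTestUtil()
```

If that is the cause, running the test through inheritance picks up the stream util and fails, while running it from its own class (with its own batch util) passes, which matches what I see.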