[jira] [Updated] (FLINK-22678) Hide ChangelogStateBackend From Users
[ https://issues.apache.org/jira/browse/FLINK-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan updated FLINK-22678: -- Component/s: Runtime / State Backends > Hide ChangelogStateBackend From Users > -- > > Key: FLINK-22678 > URL: https://issues.apache.org/jira/browse/FLINK-22678 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Yuan Mei >Assignee: Yuan Mei >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > Per the discussion on the mailing thread: > https://lists.apache.org/thread.html/ra178ce29088b1da362d98a5a6d8c7be48051caf1637ee24261738217%40%3Cdev.flink.apache.org%3E > We decided to make a refined version of loading ChangelogStateBackend: > - Define a consistent override and combination policy (flag + state backend) > at different config levels > - Explicitly define the meaning of "enable flag" = true/false/unset > - Hide ChangelogStateBackend from users > Details are described in > https://docs.google.com/document/d/13AaCf5fczYTDHZ4G1mgYL685FqbnoEhgo0cdwuJlZmw/edit?usp=sharing -- This message was sent by Atlassian Jira (v8.3.4#803005)
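For readers following this change: once ChangelogStateBackend is hidden, it is no longer constructed directly; the configured state backend is kept and changelog support is toggled via the flag. A minimal sketch of the resulting user-facing API, assuming the enableChangelogStateBackend method and the state.backend.changelog.enabled option introduced around Flink 1.14 (verify against your version):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChangelogFlagExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // true/false set the flag explicitly at the job level; leaving it
        // unset defers to the cluster configuration key
        // state.backend.changelog.enabled, per the override policy above.
        env.enableChangelogStateBackend(true);
    }
}
{code}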
[jira] [Updated] (FLINK-23030) PartitionRequestClientFactory#createPartitionRequestClient should throw when network failure
[ https://issues.apache.org/jira/browse/FLINK-23030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jin Xing updated FLINK-23030: - Description: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [1] [https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] was: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [[1] https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html|https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] > PartitionRequestClientFactory#createPartitionRequestClient should throw when > network failure > > > Key: FLINK-23030 > URL: https://issues.apache.org/jira/browse/FLINK-23030 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Reporter: Jin Xing >Priority: Blocker > > In the current _PartitionRequestClientFactory#createPartitionRequestClient_, > _ChannelFuture#await()_ is invoked to build a connection to the remote host > synchronously. > But according to the doc of _io.netty.util.concurrent.Future_ [1] and its > implementation _io.netty.channel.DefaultChannelPromise_ [2], > _ChannelFuture#await()_ never throws when the future completes with a > failure. I guess what Flink needs is _ChannelFuture#sync()._ > [1] [https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] > [2] > [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23030) PartitionRequestClientFactory#createPartitionRequestClient should throw when network failure
Jin Xing created FLINK-23030: Summary: PartitionRequestClientFactory#createPartitionRequestClient should throw when network failure Key: FLINK-23030 URL: https://issues.apache.org/jira/browse/FLINK-23030 Project: Flink Issue Type: Bug Components: Runtime / Network Reporter: Jin Xing In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [1][https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23030) PartitionRequestClientFactory#createPartitionRequestClient should throw when network failure
[ https://issues.apache.org/jira/browse/FLINK-23030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jin Xing updated FLINK-23030: - Description: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [[1] https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html|https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] was: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [1][https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java > PartitionRequestClientFactory#createPartitionRequestClient should throw when > network failure > > > Key: FLINK-23030 > URL: https://issues.apache.org/jira/browse/FLINK-23030 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Reporter: Jin Xing >Priority: Blocker > > In the current _PartitionRequestClientFactory#createPartitionRequestClient_, > _ChannelFuture#await()_ is invoked to build a connection to the remote host > synchronously. > But according to the doc of _io.netty.util.concurrent.Future_ [1] and its > implementation _io.netty.channel.DefaultChannelPromise_ [2], > _ChannelFuture#await()_ never throws when the future completes with a > failure. I guess what Flink needs is _ChannelFuture#sync()._ > > [[1] > https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html|https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] > [2] > [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23026) OVER WINDOWS function lost data
[ https://issues.apache.org/jira/browse/FLINK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365286#comment-17365286 ] Jark Wu commented on FLINK-23026: - This is as expected, because your watermark value is {{"onSellTime - INTERVAL '0' SECOND"}}. According to the watermark definition: > A watermark signifies that no events with a timestamp **smaller or equal** to > the watermark's time will occur after the watermark. That means your second record {{ITEM002,Electronic,2017-11-11 10:02:00,50}} is recognized as a late event and will be dropped, because the result for timestamp {{2017-11-11 10:02:00}} has already been output. > OVER WINDOWS function lost data > --- > > Key: FLINK-23026 > URL: https://issues.apache.org/jira/browse/FLINK-23026 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Table SQL / Client >Affects Versions: 1.12.1 >Reporter: MOBIN >Priority: Critical > Attachments: image-2021-06-18-10-54-18-125.png > > > {code:java} > Flink SQL> CREATE TABLE tmall_item( > > itemID VARCHAR, > > itemType VARCHAR, > > eventtime varchar, > > onSellTime AS TO_TIMESTAMP(eventtime), > > price DOUBLE, > > WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND > > ) with ( > > 'connector.type' = 'kafka', > >'connector.version' = 'universal', > >'connector.topic' = 'items', > >'format.type' = 'csv', > >'connector.properties.bootstrap.servers' = 'localhost:9092' > > ); > > > [INFO] Table has been created. > Flink SQL> SELECT > > itemType, > > COUNT(itemID) OVER ( > > PARTITION BY itemType > > ORDER BY onSellTime > > RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot > > FROM tmall_item; > {code} > When I enter the following data into the topic, its Electronic count value is > 3, which should normally be 4. If the event time and the value of the > partition field are the same, data will be lost > ITEM001,Electronic,2017-11-11 10:01:00,20 > {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 > 10:02:00{color},50 > {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 > 10:02:00{color},50 > ITEM003,Electronic,2017-11-11 10:03:00,50 > !image-2021-06-18-10-54-18-125.png|width=1156,height=192! -- This message was sent by Atlassian Jira (v8.3.4#803005)
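A hedged illustration of the usual way to keep such timestamp ties from being dropped: declare a nonzero out-of-orderness bound so the watermark trails the maximum seen timestamp. The table layout mirrors the report; the 1-second bound and the datagen connector are illustrative choices, not a fix prescribed by this ticket:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class WatermarkBoundExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
                "CREATE TABLE tmall_item ("
                        + "  itemID STRING,"
                        + "  itemType STRING,"
                        + "  eventtime STRING,"
                        + "  onSellTime AS TO_TIMESTAMP(eventtime),"
                        + "  price DOUBLE,"
                        // With a 1-second bound, a record whose timestamp equals
                        // the current maximum is strictly greater than the
                        // watermark and is therefore not classified as late.
                        + "  WATERMARK FOR onSellTime AS onSellTime - INTERVAL '1' SECOND"
                        + ") WITH ('connector' = 'datagen')");
    }
}
{code}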
[jira] [Closed] (FLINK-23026) OVER WINDOWS function lost data
[ https://issues.apache.org/jira/browse/FLINK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-23026. --- Resolution: Not A Problem > OVER WINDOWS function lost data > --- > > Key: FLINK-23026 > URL: https://issues.apache.org/jira/browse/FLINK-23026 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Table SQL / Client >Affects Versions: 1.12.1 >Reporter: MOBIN >Priority: Critical > Attachments: image-2021-06-18-10-54-18-125.png > > > {code:java} > Flink SQL> CREATE TABLE tmall_item( > > itemID VARCHAR, > > itemType VARCHAR, > > eventtime varchar, > > onSellTime AS TO_TIMESTAMP(eventtime), > > price DOUBLE, > > WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND > > ) with ( > > 'connector.type' = 'kafka', > >'connector.version' = 'universal', > >'connector.topic' = 'items', > >'format.type' = 'csv', > >'connector.properties.bootstrap.servers' = 'localhost:9092' > > ); > > > [INFO] Table has been created. > Flink SQL> SELECT > > itemType, > > COUNT(itemID) OVER ( > > PARTITION BY itemType > > ORDER BY onSellTime > > RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot > > FROM tmall_item; > {code} > When I enter the following data into the topic, its Electronic count value is > 3, which should normally be 4. If the event time and the value of the > partition field are the same, data will be lost > ITEM001,Electronic,2017-11-11 10:01:00,20 > {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 > 10:02:00{color},50 > {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 > 10:02:00{color},50 > ITEM003,Electronic,2017-11-11 10:03:00,50 > !image-2021-06-18-10-54-18-125.png|width=1156,height=192! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23030) PartitionRequestClientFactory#createPartitionRequestClient should throw when network failure
[ https://issues.apache.org/jira/browse/FLINK-23030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jin Xing updated FLINK-23030: - Description: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [1] [https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java was: In the current _PartitionRequestClientFactory#createPartitionRequestClient_, _ChannelFuture#await()_ is invoked to build a connection to the remote host synchronously. But according to the doc of _io.netty.util.concurrent.Future_ [1] and its implementation _io.netty.channel.DefaultChannelPromise_ [2], _ChannelFuture#await()_ never throws when the future completes with a failure. I guess what Flink needs is _ChannelFuture#sync()._ [1] [https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] [2] [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] > PartitionRequestClientFactory#createPartitionRequestClient should throw when > network failure > > > Key: FLINK-23030 > URL: https://issues.apache.org/jira/browse/FLINK-23030 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Reporter: Jin Xing >Priority: Blocker > > In the current _PartitionRequestClientFactory#createPartitionRequestClient_, > _ChannelFuture#await()_ is invoked to build a connection to the remote host > synchronously. > But according to the doc of _io.netty.util.concurrent.Future_ [1] and its > implementation _io.netty.channel.DefaultChannelPromise_ [2], > _ChannelFuture#await()_ never throws when the future completes with a > failure. I guess what Flink needs is _ChannelFuture#sync()._ > [1] [https://netty.io/4.1/api/io/netty/util/concurrent/class-use/Future.html] > [2] > [https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelPromise.java] > > https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java -- This message was sent by Atlassian Jira (v8.3.4#803005)
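To make the await()/sync() difference above concrete, here is a minimal, self-contained sketch using Netty's DefaultPromise directly (DefaultChannelPromise inherits this behavior; Flink actually uses the shaded org.apache.flink.shaded.netty4 classes). await() returns normally even when the future has failed, while sync() rethrows the failure cause:

{code:java}
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.GlobalEventExecutor;

public class AwaitVsSync {
    public static void main(String[] args) throws InterruptedException {
        DefaultPromise<Void> promise =
                new DefaultPromise<>(GlobalEventExecutor.INSTANCE);
        promise.setFailure(new java.io.IOException("connect failed"));

        // Completes silently despite the failure -- the caller must remember
        // to inspect isSuccess() / cause() itself.
        promise.await();
        System.out.println("await() returned; cause = " + promise.cause());

        // Rethrows the IOException, which is presumably what the factory
        // wants when the connection cannot be established.
        promise.sync();
    }
}
{code}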
[jira] [Commented] (FLINK-22954) Don't support consuming update and delete changes when use table function that does not contain table field
[ https://issues.apache.org/jira/browse/FLINK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365301#comment-17365301 ] Wenlong Lyu commented on FLINK-22954: - [~godfreyhe] thanks for correcting me, you are right. So when the function is not deterministic, it is still not supported now? I have tried to add the rewrite rule; with this PR the planner works well on joining a constant table function: https://github.com/apache/flink/pull/16192 I am thinking that maybe we should remove the RexUtil.isConstant limitation in ConstantTableFunctionScanRule, because ConstantTableFunctionScanRule only applies to: select * from lateral table(XXX), and in this case it is OK to support non-deterministic functions. What do you think? > Don't support consuming update and delete changes when use table function > that does not contain table field > --- > > Key: FLINK-22954 > URL: https://issues.apache.org/jira/browse/FLINK-22954 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0 >Reporter: hehuiyuan >Priority: Major > > {code:java} > Exception in thread "main" org.apache.flink.table.api.TableException: Table > sink 'default_catalog.default_database.kafkaTableSink' doesn't support > consuming update and delete changes which is produced by node > Join(joinType=[LeftOuterJoin], where=[true], select=[name, word], > leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])Exception in thread > "main" org.apache.flink.table.api.TableException: Table sink > 'default_catalog.default_database.kafkaTableSink' doesn't support consuming > update and delete changes which is produced by node > Join(joinType=[LeftOuterJoin], where=[true], select=[name, word], > leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.createNewNode(FlinkChangelogModeInferenceProgram.scala:382) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:265) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.org$apache$flink$table$planner$plan$optimize$program$FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$visitChild(FlinkChangelogModeInferenceProgram.scala:341) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:330) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:329) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.immutable.Range.foreach(Range.scala:160) at > scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at > scala.collection.AbstractTraversable.map(Traversable.scala:104) at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChildren(FlinkChangelogModeInferenceProgram.scala:329) > at >
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:279) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.org$apache$flink$table$planner$plan$optimize$program$FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$visitChild(FlinkChangelogModeInferenceProgram.scala:341) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:330) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor$$anonfun$3.apply(FlinkChangelogModeInferenceProgram.scala:329) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.immutable.Range.foreach(Range.scala:160) at > scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at > scala.collection.AbstractTraversable.map(Traversable.scala:104) at > org.apache.flink.table.pla
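To illustrate the query shape the comment above refers to (a LATERAL TABLE call whose arguments are constants, so the scan references no field of the joined table), here is a hedged sketch; the function, table, and field names are illustrative, and whether the plan is rejected depends on the sink's supported changelog mode:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.TableFunction;

public class ConstantLateralJoinExample {

    /** Emits every comma-separated token of its argument as one row. */
    public static class SplitFunction extends TableFunction<String> {
        public void eval(String str) {
            for (String token : str.split(",")) {
                collect(token);
            }
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.createTemporarySystemFunction("split_const", SplitFunction.class);
        tEnv.executeSql(
                "CREATE TABLE people (name STRING) WITH ('connector' = 'datagen')");
        // The function argument is a literal, so the table function scan has
        // no correlation to `people` -- the constant-scan case that the
        // ConstantTableFunctionScanRule discussion is about.
        tEnv.executeSql(
                "SELECT name, word FROM people "
                        + "LEFT JOIN LATERAL TABLE(split_const('a,b')) AS T(word) ON TRUE")
                .print();
    }
}
{code}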
[jira] [Created] (FLINK-23031) Support to emit window result with periodic or non_periodic
Aitozi created FLINK-23031: -- Summary: Support to emit window result with periodic or non_periodic Key: FLINK-23031 URL: https://issues.apache.org/jira/browse/FLINK-23031 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Reporter: Aitozi -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (FLINK-22815) Disable unaligned checkpoints for broadcast partitioning
[ https://issues.apache.org/jira/browse/FLINK-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz reopened FLINK-22815: -- Reopen to add release notes. > Disable unaligned checkpoints for broadcast partitioning > > > Key: FLINK-22815 > URL: https://issues.apache.org/jira/browse/FLINK-22815 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.4, 1.14.0, 1.12.5, 1.13.2 > > > Broadcast partitioning can not work with unaligned checkpointing. There > are no guarantees that records are consumed at the same rate in all > channels. This can result in some tasks applying state changes > corresponding to a certain broadcasted event while others don't. In turn > upon restore, it may lead to an inconsistent state. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22815) Disable unaligned checkpoints for broadcast partitioning
[ https://issues.apache.org/jira/browse/FLINK-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz closed FLINK-22815. Release Note: Unaligned checkpoints were disabled for BROADCAST exchanges. Broadcast partitioning can not work with unaligned checkpointing. There are no guarantees that records are consumed at the same rate in all channels. This can result in some tasks applying state changes corresponding to a certain broadcasted event while others don't. In turn upon restore, it may lead to an inconsistent state. Resolution: Fixed > Disable unaligned checkpoints for broadcast partitioning > > > Key: FLINK-22815 > URL: https://issues.apache.org/jira/browse/FLINK-22815 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.4, 1.14.0, 1.12.5, 1.13.2 > > > Broadcast partitioning can not work with unaligned checkpointing. There > are no guarantees that records are consumed at the same rate in all > channels. This can result in some tasks applying state changes > corresponding to a certain broadcasted event while others don't. In turn > upon restore, it may lead to an inconsistent state. -- This message was sent by Atlassian Jira (v8.3.4#803005)
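For context, unaligned checkpoints are an opt-in feature of the standard CheckpointConfig; after this fix, exchanges with BROADCAST partitioning fall back to aligned barriers even when the feature is on. A minimal sketch of how the feature is enabled (standard API, nothing specific to this ticket):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointsExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000L); // checkpoint every 10 seconds
        // Opt in to unaligned checkpoints; per this ticket, broadcast
        // exchanges are excluded because their channels may be consumed at
        // different rates.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
    }
}
{code}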
[jira] [Commented] (FLINK-23031) Support to emit window result with periodic or non_periodic
[ https://issues.apache.org/jira/browse/FLINK-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365306#comment-17365306 ] Aitozi commented on FLINK-23031: In Flink SQL, we can use EmitStrategy to set an early or late trigger with a delay. It generates a processing-time trigger at runtime. We noticed that the default processing-time timer fires periodically, which means that even if a group key arrives only once during a window's lifetime, the trigger fires every delay interval regardless of whether new data for that key has arrived. This creates many timers even though the trigger result has not changed. So I think we can add an option to choose between periodic and non-periodic window triggering. > Support to emit window result with periodic or non_periodic > --- > > Key: FLINK-23031 > URL: https://issues.apache.org/jira/browse/FLINK-23031 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Reporter: Aitozi >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
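A hedged sketch of the existing EmitStrategy knobs the comment refers to, set through the experimental early-fire options (key names as used by the Blink planner; verify against your version). The non-periodic switch at the end is this ticket's proposal and does not exist; its name is purely hypothetical:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class EmitStrategyExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Existing experimental knobs: emit partial window results every
        // 5 seconds of processing time, per window and key.
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.emit.early-fire.enabled", "true");
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.emit.early-fire.delay", "5 s");
        // Hypothetical option proposed by this ticket (NOT a real key):
        // fire only when the key received new records since the last fire.
        // tEnv.getConfig().getConfiguration()
        //         .setString("table.exec.emit.early-fire.periodic", "false");
    }
}
{code}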
[jira] [Assigned] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski reassigned FLINK-22625: -- Assignee: Anton Kalashnikov (was: Yun Gao) > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-22625. -- Fix Version/s: 1.14.0 Resolution: Fixed Probable fix merged to master as 3ae6801f6e3 > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22593) SavepointITCase.testShouldAddEntropyToSavepointPath unstable
[ https://issues.apache.org/jira/browse/FLINK-22593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-22593. -- Resolution: Fixed Merged to master as 3fa69829517 > SavepointITCase.testShouldAddEntropyToSavepointPath unstable > > > Key: FLINK-22593 > URL: https://issues.apache.org/jira/browse/FLINK-22593 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.14.0 >Reporter: Robert Metzger >Assignee: Anton Kalashnikov >Priority: Blocker > Labels: pull-request-available, stale-blocker, stale-critical, > test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=9072&view=logs&j=cc649950-03e9-5fae-8326-2f1ad744b536&t=51cab6ca-669f-5dc0-221d-1e4f7dc4fc85 > {code} > 2021-05-07T10:56:20.9429367Z May 07 10:56:20 [ERROR] Tests run: 13, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 33.441 s <<< FAILURE! - in > org.apache.flink.test.checkpointing.SavepointITCase > 2021-05-07T10:56:20.9445862Z May 07 10:56:20 [ERROR] > testShouldAddEntropyToSavepointPath(org.apache.flink.test.checkpointing.SavepointITCase) > Time elapsed: 2.083 s <<< ERROR! > 2021-05-07T10:56:20.9447106Z May 07 10:56:20 > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (3/4) of job 4e155a20f0a7895043661a6446caf1cb > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > 2021-05-07T10:56:20.9448194Z May 07 10:56:20 at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2021-05-07T10:56:20.9448797Z May 07 10:56:20 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2021-05-07T10:56:20.9449428Z May 07 10:56:20 at > org.apache.flink.test.checkpointing.SavepointITCase.submitJobAndTakeSavepoint(SavepointITCase.java:305) > 2021-05-07T10:56:20.9450160Z May 07 10:56:20 at > org.apache.flink.test.checkpointing.SavepointITCase.testShouldAddEntropyToSavepointPath(SavepointITCase.java:273) > 2021-05-07T10:56:20.9450785Z May 07 10:56:20 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-05-07T10:56:20.9451331Z May 07 10:56:20 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-05-07T10:56:20.9451940Z May 07 10:56:20 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-05-07T10:56:20.9452498Z May 07 10:56:20 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-05-07T10:56:20.9453247Z May 07 10:56:20 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2021-05-07T10:56:20.9454007Z May 07 10:56:20 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2021-05-07T10:56:20.9454687Z May 07 10:56:20 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2021-05-07T10:56:20.9455302Z May 07 10:56:20 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2021-05-07T10:56:20.9455909Z May 07 10:56:20 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2021-05-07T10:56:20.9456493Z May 07 10:56:20 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2021-05-07T10:56:20.9457074Z May 07 10:56:20 at > 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2021-05-07T10:56:20.9457636Z May 07 10:56:20 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2021-05-07T10:56:20.9458157Z May 07 10:56:20 at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2021-05-07T10:56:20.9458678Z May 07 10:56:20 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2021-05-07T10:56:20.9459252Z May 07 10:56:20 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2021-05-07T10:56:20.9459865Z May 07 10:56:20 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2021-05-07T10:56:20.9460433Z May 07 10:56:20 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2021-05-07T10:56:20.9461058Z May 07 10:56:20 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2021-05-07T10:56:20.9461607Z May 07 10:56:20 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2021-05-07T10:56:20.9462159Z May 07 10:56:20 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2021-05-07T10:56:20.9462705Z May 07 10:56:20 at > org.junit.runners.ParentRun
[jira] [Updated] (FLINK-23024) RPC result TaskManagerInfoWithSlots not serializable
[ https://issues.apache.org/jira/browse/FLINK-23024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yangze Guo updated FLINK-23024: --- Labels: pull-request-available (was: ) > RPC result TaskManagerInfoWithSlots not serializable > > > Key: FLINK-23024 > URL: https://issues.apache.org/jira/browse/FLINK-23024 > Project: Flink > Issue Type: Bug > Components: Runtime / REST >Affects Versions: 1.14.0, 1.13.1 >Reporter: Arvid Heise >Assignee: Yangze Guo >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0, 1.13.2 > > > A user reported the following stacktrace while accessing web UI. > {noformat} > Unhandled exception. > org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed > to serialize the result for RPC call : requestTaskManagerDetailsInfo. > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:404) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$0(AkkaRpcActor.java:360) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) > ~[?:1.8.0_251] at > java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:848) > ~[?:1.8.0_251] at > java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2168) > ~[?:1.8.0_251] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.sendAsyncResponse(AkkaRpcActor.java:352) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:319) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.Actor$class.aroundReceive(Actor.scala:517) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.ActorCell.invoke(ActorCell.scala:561) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.run(Mailbox.scala:225) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.exec(Mailbox.scala:235) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > 
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > [flink-dist_2.11-1.13.1.jar:1.13.1] Caused by: > java.io.NotSerializableException: > org.apache.flink.runtime.resourcemanager.TaskManagerInfoWithSlots at > java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184) > ~[?:1.8.0_251] at > java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) > ~[?:1.8.0_251] at > org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:624) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcSerializedValue.valueOf(AkkaRpcSerializedValue.java:66) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:387) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] ... 27 more > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
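The root cause in the trace above is plain Java serialization of a remote RPC result. A minimal sketch of the failure mode and the usual fix; the field shapes are illustrative placeholders, not the actual class from Flink:

{code:java}
import java.io.Serializable;
import java.util.Collection;

// Any value returned across the Akka RPC boundary is encoded with Java
// serialization, so the response type must implement Serializable --
// otherwise AkkaRpcActor#serializeRemoteResultAndVerifySize fails exactly
// as in the stack trace above.
public class TaskManagerInfoWithSlots implements Serializable {
    private static final long serialVersionUID = 1L;

    // The referenced fields must be serializable as well, transitively.
    private final Serializable taskManagerInfo;
    private final Collection<? extends Serializable> allocatedSlots;

    public TaskManagerInfoWithSlots(
            Serializable taskManagerInfo,
            Collection<? extends Serializable> allocatedSlots) {
        this.taskManagerInfo = taskManagerInfo;
        this.allocatedSlots = allocatedSlots;
    }
}
{code}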
[jira] [Assigned] (FLINK-18783) Load AkkaRpcService through separate class loader
[ https://issues.apache.org/jira/browse/FLINK-18783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler reassigned FLINK-18783: Assignee: Chesnay Schepler > Load AkkaRpcService through separate class loader > - > > Key: FLINK-18783 > URL: https://issues.apache.org/jira/browse/FLINK-18783 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.10.1, 1.11.1, 1.12.0 >Reporter: Till Rohrmann >Assignee: Chesnay Schepler >Priority: Minor > Labels: auto-deprioritized-major > > In order to reduce the runtime dependency on Scala and also to hide the Akka > dependency I suggest to load the AkkaRpcService and its dependencies through > a separate class loader similar to what we do with Flink's plugins. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18783) Load AkkaRpcService through separate class loader
[ https://issues.apache.org/jira/browse/FLINK-18783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-18783: - Parent: FLINK-14105 Issue Type: Sub-task (was: Improvement) > Load AkkaRpcService through separate class loader > - > > Key: FLINK-18783 > URL: https://issues.apache.org/jira/browse/FLINK-18783 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Affects Versions: 1.10.1, 1.11.1, 1.12.0 >Reporter: Till Rohrmann >Assignee: Chesnay Schepler >Priority: Minor > Labels: auto-deprioritized-major > > In order to reduce the runtime dependency on Scala and also to hide the Akka > dependency I suggest to load the AkkaRpcService and its dependencies through > a separate class loader similar to what we do with Flink's plugins. -- This message was sent by Atlassian Jira (v8.3.4#803005)
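A hedged sketch of the plugin-style isolation the ticket suggests: resolve the RPC service from a dedicated URLClassLoader so Akka and Scala classes never reach the main class path. The jar path and class name are illustrative, this assumes a self-contained (shaded) jar, and Flink's real plugin loader additionally applies child-first ordering; this only shows the basic mechanism:

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class IsolatedRpcLoadingSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative path to a self-contained jar bundling Akka + Scala.
        URL[] jars = {Paths.get("lib/flink-rpc-akka.jar").toUri().toURL()};
        try (URLClassLoader rpcLoader = new URLClassLoader(
                jars, IsolatedRpcLoadingSketch.class.getClassLoader())) {
            // Classes loaded here live in rpcLoader, so their Akka/Scala
            // dependencies resolve inside the jar instead of leaking onto
            // the application class path.
            Class<?> serviceClass = rpcLoader.loadClass(
                    "org.apache.flink.runtime.rpc.akka.AkkaRpcService");
            System.out.println("loaded by " + serviceClass.getClassLoader());
        }
    }
}
{code}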
[jira] [Updated] (FLINK-21952) Make all the "Connection reset by peer" exception wrapped as RemoteTransportException
[ https://issues.apache.org/jira/browse/FLINK-21952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-21952: Labels: (was: auto-deprioritized-major) > Make all the "Connection reset by peer" exception wrapped as > RemoteTransportException > - > > Key: FLINK-21952 > URL: https://issues.apache.org/jira/browse/FLINK-21952 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Yun Gao >Assignee: Yun Gao >Priority: Minor > > In CreditBasedPartitionRequestClientHandler#exceptionCaught, an IOException > with the exact message "Connection reset by peer" is marked as a > RemoteTransportException. > However, the current Netty implementation might instead throw > {code:java} > org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > {code} > in some cases. That exception is wrapped as a LocalTransportException, which > might cause some confusion. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-21952) Make all the "Connection reset by peer" exception wrapped as RemoteTransportException
[ https://issues.apache.org/jira/browse/FLINK-21952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-21952: Labels: auto-deprioritized-major (was: auto-deprioritized-major stale-assigned) > Make all the "Connection reset by peer" exception wrapped as > RemoteTransportException > - > > Key: FLINK-21952 > URL: https://issues.apache.org/jira/browse/FLINK-21952 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Yun Gao >Assignee: Yun Gao >Priority: Minor > Labels: auto-deprioritized-major > > In CreditBasedPartitionRequestClientHandler#exceptionCaught, an IOException > with the exact message "Connection reset by peer" is marked as a > RemoteTransportException. > However, the current Netty implementation might instead throw > {code:java} > org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > {code} > in some cases. That exception is wrapped as a LocalTransportException, which > might cause some confusion. -- This message was sent by Atlassian Jira (v8.3.4#803005)
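A hedged sketch of the classification problem: Errors$NativeIoException extends IOException, but its message is "readAddress(..) failed: Connection reset by peer", so an exact-equality match on the message misses it; a substring match covers both variants. The handler shape below is illustrative, not Flink's actual code:

{code:java}
import java.io.IOException;

public class ConnectionResetClassification {

    /** Returns which transport exception the cause would be wrapped in. */
    static String classify(Throwable cause) {
        // contains() instead of equals() also matches the epoll variant
        // "readAddress(..) failed: Connection reset by peer".
        if (cause instanceof IOException
                && cause.getMessage() != null
                && cause.getMessage().contains("Connection reset by peer")) {
            return "RemoteTransportException";
        }
        return "LocalTransportException";
    }

    public static void main(String[] args) {
        System.out.println(classify(
                new IOException("Connection reset by peer")));
        System.out.println(classify(
                new IOException("readAddress(..) failed: Connection reset by peer")));
    }
}
{code}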
[jira] [Commented] (FLINK-23020) NullPointerException when running collect twice from Python API
[ https://issues.apache.org/jira/browse/FLINK-23020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365374#comment-17365374 ] Maciej Bryński commented on FLINK-23020: I'm not sure how the kernel interrupt works. It is possible there are multiple threads involved. I also think there is another problem: the interrupt should kill the underlying query without needing this try/except clause. > NullPointerException when running collect twice from Python API > --- > > Key: FLINK-23020 > URL: https://issues.apache.org/jira/browse/FLINK-23020 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.13.1 >Reporter: Maciej Bryński >Priority: Major > > Hi, > I'm trying to use PyFlink from Jupyter Notebook and I'm getting NPE in > following scenario. > 1. I'm creating datagen table. > {code:java} > from pyflink.table import EnvironmentSettings, TableEnvironment, > StreamTableEnvironment, DataTypes > from pyflink.table.udf import udf > from pyflink.common import Configuration, Row > from pyflink.datastream import StreamExecutionEnvironment > from pyflink.java_gateway import get_gateway > conf = Configuration() > env_settings = > EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build() > table_env = StreamTableEnvironment.create(environment_settings=env_settings) > table_env.get_config().get_configuration().set_integer("parallelism.default", > 1) > table_env.execute_sql("DROP TABLE IF EXISTS datagen") > table_env.execute_sql(""" > CREATE TABLE datagen ( > id INT > ) WITH ( > 'connector' = 'datagen' > ) > """) > {code} > 2. Then I'm running collect > {code:java} > try: > result = table_env.sql_query("select * from datagen limit 1").execute() > for r in result.collect(): > print(r) > except KeyboardInterrupt: > result.get_job_client().cancel() > {code} > 3. I'm using the "interrupt the kernel" button. This is handled by the above > try/except and will cancel the query. > 4. I'm running collect from point 2 one more time. Result: > {code:java} > --- > Py4JJavaError Traceback (most recent call last) > in > 1 try: > > 2 result = table_env.sql_query("select * from datagen limit > 1").execute() > 3 for r in result.collect(): > 4 print(r) > 5 except KeyboardInterrupt: > /usr/local/lib/python3.8/dist-packages/pyflink/table/table.py in execute(self) >1070 """ >1071 self._t_env._before_execute() > -> 1072 return TableResult(self._j_table.execute()) >1073 >1074 def explain(self, *extra_details: ExplainDetail) -> str: > /usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py in __call__(self, > *args) >1283 >1284 answer = self.gateway_client.send_command(command) > -> 1285 return_value = get_return_value( >1286 answer, self.gateway_client, self.target_id, self.name) >1287 > /usr/local/lib/python3.8/dist-packages/pyflink/util/exceptions.py in deco(*a, > **kw) > 144 def deco(*a, **kw): > 145 try: > --> 146 return f(*a, **kw) > 147 except Py4JJavaError as e: > 148 from pyflink.java_gateway import get_gateway > /usr/local/lib/python3.8/dist-packages/py4j/protocol.py in > get_return_value(answer, gateway_client, target_id, name) > 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) > 325 if answer[1] == REFERENCE_TYPE: > --> 326 raise Py4JJavaError( > 327 "An error occurred while calling {0}{1}{2}.\n". > 328 format(target_id, ".", name), value) > Py4JJavaError: An error occurred while calling o69.execute.
> : java.lang.NullPointerException > at java.base/java.util.Objects.requireNonNull(Objects.java:221) > at > org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:144) > at > org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:108) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.(FlinkRelMetadataQuery.java:73) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:54) > at > org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39) > at > org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38) > at > org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178) > at > org.apache.calcite.rel.metadata.RelMdUtil.clea
[jira] [Updated] (FLINK-22518) Translate the page of "High Availability (HA)" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-22518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-22518: - Fix Version/s: 1.14.0 > Translate the page of "High Availability (HA)" into Chinese > --- > > Key: FLINK-22518 > URL: https://issues.apache.org/jira/browse/FLINK-22518 > Project: Flink > Issue Type: New Feature > Components: chinese-translation >Reporter: movesan Yang >Assignee: movesan Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > > The module of "High Availability (HA)" contains the following three pages: > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/zookeeper_ha.html|https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/zookeeper_ha.html,] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/kubernetes_ha.html] > The English markdown files can be found in > [https://github.com/apache/flink/tree/master/docs/content.zh/docs/deployment/ha]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22518) Translate the page of "High Availability (HA)" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-22518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365377#comment-17365377 ] Yun Tang commented on FLINK-22518: -- Merged to master: fc73b3fe99951c91459842cad60236c537cd1517 > Translate the page of "High Availability (HA)" into Chinese > --- > > Key: FLINK-22518 > URL: https://issues.apache.org/jira/browse/FLINK-22518 > Project: Flink > Issue Type: New Feature > Components: chinese-translation >Reporter: movesan Yang >Assignee: movesan Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > > The module of "High Availability (HA)" contains the following three pages: > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/zookeeper_ha.html|https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/zookeeper_ha.html,] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/kubernetes_ha.html] > The English markdown files can be found in > [https://github.com/apache/flink/tree/master/docs/content.zh/docs/deployment/ha]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-23020) NullPointerException when running collect twice from Python API
[ https://issues.apache.org/jira/browse/FLINK-23020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365374#comment-17365374 ] Maciej Bryński edited comment on FLINK-23020 at 6/18/21, 9:01 AM: -- I'm not sure how the kernel interrupt works. It is possible there are multiple threads involved. I also think there is another problem: the interrupt should kill the underlying query without needing this try/except clause. But right now the interrupt leaves the query running and gives the following stack trace. {code:java} --- KeyboardInterrupt Traceback (most recent call last) in 1 result = table_env.sql_query("select * from datagen limit 1").execute() > 2 for r in result.collect(): 3 print(r) /usr/local/lib/python3.8/dist-packages/pyflink/table/table_result.py in __next__(self) 234 235 def __next__(self): --> 236 if not self._j_closeable_iterator.hasNext(): 237 raise StopIteration("No more data.") 238 gateway = get_gateway() /usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py in __call__(self, *args) 1282 proto.END_COMMAND_PART 1283 -> 1284 answer = self.gateway_client.send_command(command) 1285 return_value = get_return_value( 1286 answer, self.gateway_client, self.target_id, self.name) /usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py in send_command(self, command, retry, binary) 1012 connection = self._get_connection() 1013 try: -> 1014 response = connection.send_command(command) 1015 if binary: 1016 return response, self._create_connection_guard(connection) /usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py in send_command(self, command) 1179 1180 try: -> 1181 answer = smart_decode(self.stream.readline()[:-1]) 1182 logger.debug("Answer received: {0}".format(answer)) 1183 if answer.startswith(proto.RETURN_MESSAGE): /usr/lib/python3.8/socket.py in readinto(self, b) 667 while True: 668 try: --> 669 return self._sock.recv_into(b) 670 except timeout: 671 self._timeout_occurred = True KeyboardInterrupt: {code} was (Author: maver1ck): I'm not sure how the kernel interrupt works. It is possible there are multiple threads involved. I also think there is another problem: the interrupt should kill the underlying query without needing this try/except clause. > NullPointerException when running collect twice from Python API > --- > > Key: FLINK-23020 > URL: https://issues.apache.org/jira/browse/FLINK-23020 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.13.1 >Reporter: Maciej Bryński >Priority: Major > > Hi, > I'm trying to use PyFlink from Jupyter Notebook and I'm getting NPE in > following scenario. > 1. I'm creating datagen table. > {code:java} > from pyflink.table import EnvironmentSettings, TableEnvironment, > StreamTableEnvironment, DataTypes > from pyflink.table.udf import udf > from pyflink.common import Configuration, Row > from pyflink.datastream import StreamExecutionEnvironment > from pyflink.java_gateway import get_gateway > conf = Configuration() > env_settings = > EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build() > table_env = StreamTableEnvironment.create(environment_settings=env_settings) > table_env.get_config().get_configuration().set_integer("parallelism.default", > 1) > table_env.execute_sql("DROP TABLE IF EXISTS datagen") > table_env.execute_sql(""" > CREATE TABLE datagen ( > id INT > ) WITH ( > 'connector' = 'datagen' > ) > """) > {code} > 2.
Then I'm running collect > {code:java} > try: > result = table_env.sql_query("select * from datagen limit 1").execute() > for r in result.collect(): > print(r) > except KeyboardInterrupt: > result.get_job_client().cancel() > {code} > 3. I'm using "interrupt the kernel" button. This is handled by above > try/except and will cancel the query. > 4. I'm running collect from point 2 one more time. Result: > {code:java} > --- > Py4JJavaError Traceback (most recent call last) > in > 1 try: > > 2 result = table_env.sql_query("select * from datagen limit > 1").execute() > 3 for r in result.collect(): > 4 print(r) > 5 except KeyboardInterrupt: > /usr/local/lib/python3.8/dist-packages/pyflink/table/table.py in execute(self)
[jira] [Created] (FLINK-23032) Refactor HiveSource to make it usable in data stream job
Rui Li created FLINK-23032: -- Summary: Refactor HiveSource to make it usable in data stream job Key: FLINK-23032 URL: https://issues.apache.org/jira/browse/FLINK-23032 Project: Flink Issue Type: Improvement Components: Connectors / Hive Reporter: Rui Li Assignee: Rui Li Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22967) CodeGenException: Unsupported cast for nested Decimals
[ https://issues.apache.org/jira/browse/FLINK-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365381#comment-17365381 ] Maciej Bryński commented on FLINK-22967: I'm closing this as fixed in 1.13.1 > CodeGenException: Unsupported cast for nested Decimals > -- > > Key: FLINK-22967 > URL: https://issues.apache.org/jira/browse/FLINK-22967 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.12.2 >Reporter: Maciej Bryński >Priority: Major > > This query is failing in SQL Client > {code:java} > Flink SQL> CREATE TABLE abc ( > > test ROW>> > > ) WITH ( 'connector' = 'datagen'); > [INFO] Table has been created. > Flink SQL> select * from abc; > [ERROR] Could not execute SQL statement. Reason: > org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast > from 'ROW<`a` ARRAY>>' to 'ROW<`a` > ARRAY>>'. > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22967) CodeGenException: Unsupported cast for nested Decimals
[ https://issues.apache.org/jira/browse/FLINK-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maciej Bryński updated FLINK-22967: --- Fix Version/s: 1.13.1 > CodeGenException: Unsupported cast for nested Decimals > -- > > Key: FLINK-22967 > URL: https://issues.apache.org/jira/browse/FLINK-22967 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.12.2 >Reporter: Maciej Bryński >Priority: Major > Fix For: 1.13.1 > > > This query is failing in SQL Client > {code:java} > Flink SQL> CREATE TABLE abc ( > > test ROW>> > > ) WITH ( 'connector' = 'datagen'); > [INFO] Table has been created. > Flink SQL> select * from abc; > [ERROR] Could not execute SQL statement. Reason: > org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast > from 'ROW<`a` ARRAY>>' to 'ROW<`a` > ARRAY>>'. > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22967) CodeGenException: Unsupported cast for nested Decimals
[ https://issues.apache.org/jira/browse/FLINK-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maciej Bryński closed FLINK-22967. -- Resolution: Fixed > CodeGenException: Unsupported cast for nested Decimals > -- > > Key: FLINK-22967 > URL: https://issues.apache.org/jira/browse/FLINK-22967 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.12.2 >Reporter: Maciej Bryński >Priority: Major > Fix For: 1.13.1 > > > This query is failing in SQL Client > {code:java} > Flink SQL> CREATE TABLE abc ( > > test ROW>> > > ) WITH ( 'connector' = 'datagen'); > [INFO] Table has been created. > Flink SQL> select * from abc; > [ERROR] Could not execute SQL statement. Reason: > org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast > from 'ROW<`a` ARRAY>>' to 'ROW<`a` > ARRAY>>'. > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-22955) lookup join filter push down result to mismatch function signature
[ https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365382#comment-17365382 ] Cooper Luan edited comment on FLINK-22955 at 6/18/21, 9:05 AM: --- Actually it's not possible for us to implement more eval methods. Is there a rule in PHYSICAL_OPT_RULES I can disable, so the filter won't be pushed down to the lookup table? [~qingru zhang] was (Author: gsavl): actually it's not possible for us to implement more eval method is there a rule in PHYSICAL_OPT_RULES I can disable? so the filer won't push down to lookup table > lookup join filter push down result to mismatch function signature > -- > > Key: FLINK-22955 > URL: https://issues.apache.org/jira/browse/FLINK-22955 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.3, 1.13.1, 1.12.4 > Environment: Flink 1.13.1 > how to reproduce: patch file attached >Reporter: Cooper Luan >Priority: Critical > Fix For: 1.11.4, 1.12.5, 1.13.2 > > Attachments: > 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch > > > A SQL statement like this may result in a lookup function signature mismatch exception when > explaining the SQL > {code:sql} > CREATE TEMPORARY VIEW v_vvv AS > SELECT * FROM MyTable AS T > JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D > ON T.a = D.id; > SELECT a,b,id,name > FROM v_vvv > WHERE age = 10;{code} > the lookup function is > {code:scala} > class AsyncTableFunction1 extends AsyncTableFunction[RowData] { > def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: > Integer): Unit = { > } > }{code} > exec plan is > {code:java} > LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = > 10)], select=[a, b, id, name]) >+- Calc(select=[a, b]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, rowtime]) > {code} > the "lookup=[age=10, id=a]" results in a signature mismatch > > but if I add 1 more insert, it works well > {code:sql} > SELECT a,b,id,name > FROM v_vvv > WHERE age = 30 > {code} > exec plan is > {code:java} > == Optimized Execution Plan == > LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, > rowtime, id, name, age, ts])(reuse_id=[1]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, > rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 10)]) >+- > Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 30)]) >+- Reused(reference_id=[1]) > {code} > the LookupJoin node uses "lookup=[id=a]" (right) not "lookup=[age=10, id=a]" > (wrong) > > so, in the "multi insert" case, the planner works great > in the "single insert" case, the planner throws an exception -- This message was sent by Atlassian Jira (v8.3.4#803005)
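For readers hitting the same issue: until the planner stops pushing the filter into the lookup keys, the UDF-side workaround is to provide an eval overload that also accepts the pushed-down column. A minimal sketch in Java (names mirror the report; the parameter order for the pushed-down call is an assumption and should be verified against the generated lookup call):

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.table.data.RowData;
import org.apache.flink.table.functions.AsyncTableFunction;

// Hypothetical sketch, not the reporter's actual code.
public class AsyncTableFunctionWithPushdown extends AsyncTableFunction<RowData> {

    // Overload for the plain join key: lookup=[id=a].
    public void eval(CompletableFuture<Collection<RowData>> resultFuture, Integer a) {
        // ... fetch rows by id and complete the future ...
        resultFuture.complete(Collections.emptyList());
    }

    // Overload for the plan with the pushed-down filter: lookup=[age=10, id=a].
    // Parameter order is an assumption; check it against the generated code.
    public void eval(CompletableFuture<Collection<RowData>> resultFuture, Integer a, Integer age) {
        // ... fetch rows by id, filter by age, complete the future ...
        resultFuture.complete(Collections.emptyList());
    }
}
{code}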
[jira] [Commented] (FLINK-22955) lookup join filter push down result to mismatch function signature
[ https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365382#comment-17365382 ] Cooper Luan commented on FLINK-22955: - Actually it's not possible for us to implement more eval methods. Is there a rule in PHYSICAL_OPT_RULES I can disable, so the filter won't be pushed down to the lookup table? > lookup join filter push down result to mismatch function signature > -- > > Key: FLINK-22955 > URL: https://issues.apache.org/jira/browse/FLINK-22955 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.3, 1.13.1, 1.12.4 > Environment: Flink 1.13.1 > how to reproduce: patch file attached >Reporter: Cooper Luan >Priority: Critical > Fix For: 1.11.4, 1.12.5, 1.13.2 > > Attachments: > 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch > > > A SQL statement like this may result in a lookup function signature mismatch exception when > explaining the SQL > {code:sql} > CREATE TEMPORARY VIEW v_vvv AS > SELECT * FROM MyTable AS T > JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D > ON T.a = D.id; > SELECT a,b,id,name > FROM v_vvv > WHERE age = 10;{code} > the lookup function is > {code:scala} > class AsyncTableFunction1 extends AsyncTableFunction[RowData] { > def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: > Integer): Unit = { > } > }{code} > exec plan is > {code:java} > LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = > 10)], select=[a, b, id, name]) >+- Calc(select=[a, b]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, rowtime]) > {code} > the "lookup=[age=10, id=a]" results in a signature mismatch > > but if I add 1 more insert, it works well > {code:sql} > SELECT a,b,id,name > FROM v_vvv > WHERE age = 30 > {code} > exec plan is > {code:java} > == Optimized Execution Plan == > LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, > rowtime, id, name, age, ts])(reuse_id=[1]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, > rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 10)]) >+- > Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 30)]) >+- Reused(reference_id=[1]) > {code} > the LookupJoin node uses "lookup=[id=a]" (right) not "lookup=[age=10, id=a]" > (wrong) > > so, in the "multi insert" case, the planner works great > in the "single insert" case, the planner throws an exception -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-22518) Translate the page of "High Availability (HA)" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-22518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang resolved FLINK-22518. -- Resolution: Fixed > Translate the page of "High Availability (HA)" into Chinese > --- > > Key: FLINK-22518 > URL: https://issues.apache.org/jira/browse/FLINK-22518 > Project: Flink > Issue Type: New Feature > Components: chinese-translation >Reporter: movesan Yang >Assignee: movesan Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > > The module of "High Availability (HA)" contains the following three pages: > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/zookeeper_ha.html] > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/deployment/ha/kubernetes_ha.html] > The markdown file can be found in > [https://github.com/apache/flink/tree/master/docs/content.zh/docs/deployment/ha] > in English. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365384#comment-17365384 ] Yun Gao commented on FLINK-22625: - Hi [~akalashnikov], does the original issue happen because a single sink task might count down `savepointLatch()` multiple times? Many thanks for the fix, it also looks good to me, and sorry for the late reply. > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 
00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365384#comment-17365384 ] Yun Gao edited comment on FLINK-22625 at 6/18/21, 9:09 AM: --- Hi [~akalashnikov], does the original issue happen because a single sink task might count down `savepointLatch` multiple times? Many thanks for the fix, it also looks good to me, and sorry for the late reply. was (Author: gaoyunhaii): Hi [~akalashnikov], does the original issue happens due to a single sink task might count down `savepointLatch()` multiple times ? Very thanks for the fix and it also looks good to me, and sorry for the late reply. > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > 
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-22955) lookup join filter push down result to mismatch function signature
[ https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365382#comment-17365382 ] Cooper Luan edited comment on FLINK-22955 at 6/18/21, 9:10 AM: --- Actually it's not possible for us to implement more eval methods. Is there a rule in PHYSICAL_OPT_RULES I can disable (as a workaround), so the filter won't be pushed down to the lookup table? [~qingru zhang] was (Author: gsavl): actually it's not possible for us to implement more eval method is there a rule in PHYSICAL_OPT_RULES I can disable? so the filer won't push down to lookup table > lookup join filter push down result to mismatch function signature > -- > > Key: FLINK-22955 > URL: https://issues.apache.org/jira/browse/FLINK-22955 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.3, 1.13.1, 1.12.4 > Environment: Flink 1.13.1 > how to reproduce: patch file attached >Reporter: Cooper Luan >Priority: Critical > Fix For: 1.11.4, 1.12.5, 1.13.2 > > Attachments: > 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch > > > A SQL statement like this may result in a lookup function signature mismatch exception when > explaining the SQL > {code:sql} > CREATE TEMPORARY VIEW v_vvv AS > SELECT * FROM MyTable AS T > JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D > ON T.a = D.id; > SELECT a,b,id,name > FROM v_vvv > WHERE age = 10;{code} > the lookup function is > {code:scala} > class AsyncTableFunction1 extends AsyncTableFunction[RowData] { > def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: > Integer): Unit = { > } > }{code} > exec plan is > {code:java} > LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = > 10)], select=[a, b, id, name]) >+- Calc(select=[a, b]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, rowtime]) > {code} > the "lookup=[age=10, id=a]" results in a signature mismatch > > but if I add 1 more insert, it works well > {code:sql} > SELECT a,b,id,name > FROM v_vvv > WHERE age = 30 > {code} > exec plan is > {code:java} > == Optimized Execution Plan == > LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, > rowtime, id, name, age, ts])(reuse_id=[1]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, > rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 10)]) >+- > Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 30)]) >+- Reused(reference_id=[1]) > {code} > the LookupJoin node uses "lookup=[id=a]" (right) not "lookup=[age=10, id=a]" > (wrong) > > so, in the "multi insert" case, the planner works great > in the "single insert" case, the planner throws an exception -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-15752) Backpressure stats sometimes broken in WebUI
[ https://issues.apache.org/jira/browse/FLINK-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-15752. -- Resolution: Duplicate I have independently discovered this issue and fixed it some time ago as FLINK-22489. > Backpressure stats sometimes broken in WebUI > > > Key: FLINK-15752 > URL: https://issues.apache.org/jira/browse/FLINK-15752 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend >Affects Versions: 1.10.0 >Reporter: Nico Kruber >Priority: Minor > Labels: auto-deprioritized-major > Attachments: backpressure-stats.png > > > The backpressure monitor shows two values: ratio and status. It looks like > they are not always in sync. See below (for low ratios): > !backpressure-stats.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365389#comment-17365389 ] Anton Kalashnikov commented on FLINK-22625: --- ??a single sink task might count down `savepointLatch` multiple times?? Yes, it was exactly the reason. > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- 
This message was sent by Atlassian Jira (v8.3.4#803005)
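For readers following the fix: the failure mode confirmed above is a latch sized to the number of subtasks being counted down more than once by the same subtask, so it opens early. A minimal illustrative sketch of the guard (hypothetical names, not the actual test code):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: each subtask holds one guard around a shared latch, so
// the latch is counted down at most once per subtask no matter how often the
// savepoint path is re-entered.
class SubtaskSavepointGuard {
    private final CountDownLatch sharedSavepointLatch;
    private final AtomicBoolean counted = new AtomicBoolean(false);

    SubtaskSavepointGuard(CountDownLatch sharedSavepointLatch) {
        this.sharedSavepointLatch = sharedSavepointLatch;
    }

    void onSavepointSignal() {
        // compareAndSet ensures the count-down happens exactly once per subtask.
        if (counted.compareAndSet(false, true)) {
            sharedSavepointLatch.countDown();
        }
    }
}
{code}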
[jira] [Comment Edited] (FLINK-15752) Backpressure stats sometimes broken in WebUI
[ https://issues.apache.org/jira/browse/FLINK-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365386#comment-17365386 ] Piotr Nowojski edited comment on FLINK-15752 at 6/18/21, 9:15 AM: -- I have independently discovered this issue and fixed it some time ago as FLINK-22489. It wasn't a problem with the REST API, but a trivial typo in the WebUI itself. was (Author: pnowojski): I have independently discovered this issue and fixed it sometime ago as FLINK-22489 > Backpressure stats sometimes broken in WebUI > > > Key: FLINK-15752 > URL: https://issues.apache.org/jira/browse/FLINK-15752 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend >Affects Versions: 1.10.0 >Reporter: Nico Kruber >Priority: Minor > Labels: auto-deprioritized-major > Attachments: backpressure-stats.png > > > The backpressure monitor shows two values: ratio and status. It looks like > they are not always in sync. See below (for low ratios): > !backpressure-stats.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22489) subtask backpressure indicator shows value for entire job
[ https://issues.apache.org/jira/browse/FLINK-22489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski updated FLINK-22489: --- Description: In the backpressure tab of the web UI, the OK/LOW/HIGH indication is displaying the job-level backpressure for every subtask, rather than the individual subtask values (effectively showing max back pressure from all of the subtasks of the given task for every subtask, instead of the individual values). !backPressureTab.png! was: In the backpressure tab of the web UI, the OK/LOW/HIGH indication is displaying the job-level backpressure for every subtask, rather than the individual subtask values. !backPressureTab.png! > subtask backpressure indicator shows value for entire job > - > > Key: FLINK-22489 > URL: https://issues.apache.org/jira/browse/FLINK-22489 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend >Affects Versions: 1.9.3, 1.10.3, 1.11.3, 1.12.2, 1.13.0 >Reporter: David Anderson >Assignee: Piotr Nowojski >Priority: Major > Labels: pull-request-available > Fix For: 1.11.4, 1.14.0, 1.13.1, 1.12.4 > > Attachments: backPressureTab.png > > > In the backpressure tab of the web UI, the OK/LOW/HIGH indication is > displaying the job-level backpressure for every subtask, rather than the > individual subtask values (effectively showing max back pressure from all of > the subtasks of the given task for every subtask, instead of the individual > values). > !backPressureTab.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-22955) lookup join filter push down result to mismatch function signature
[ https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365382#comment-17365382 ] Cooper Luan edited comment on FLINK-22955 at 6/18/21, 9:16 AM: --- Actually it's not possible for us to implement more eval methods (it's not a once-for-all fix). Is there a rule in PHYSICAL_OPT_RULES I can disable (as a workaround), so the filter won't be pushed down to the lookup table? [~qingru zhang] was (Author: gsavl): actually it's not possible for us to implement more eval method is there a rule in PHYSICAL_OPT_RULES I can disable (as a workaround)? so the filer won't push down to lookup table [~qingru zhang] > lookup join filter push down result to mismatch function signature > -- > > Key: FLINK-22955 > URL: https://issues.apache.org/jira/browse/FLINK-22955 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.3, 1.13.1, 1.12.4 > Environment: Flink 1.13.1 > how to reproduce: patch file attached >Reporter: Cooper Luan >Priority: Critical > Fix For: 1.11.4, 1.12.5, 1.13.2 > > Attachments: > 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch > > > A SQL statement like this may result in a lookup function signature mismatch exception when > explaining the SQL > {code:sql} > CREATE TEMPORARY VIEW v_vvv AS > SELECT * FROM MyTable AS T > JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D > ON T.a = D.id; > SELECT a,b,id,name > FROM v_vvv > WHERE age = 10;{code} > the lookup function is > {code:scala} > class AsyncTableFunction1 extends AsyncTableFunction[RowData] { > def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: > Integer): Unit = { > } > }{code} > exec plan is > {code:java} > LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = > 10)], select=[a, b, id, name]) >+- Calc(select=[a, b]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, rowtime]) > {code} > the "lookup=[age=10, id=a]" results in a signature mismatch > > but if I add 1 more insert, it works well > {code:sql} > SELECT a,b,id,name > FROM v_vvv > WHERE age = 30 > {code} > exec plan is > {code:java} > == Optimized Execution Plan == > LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, > rowtime, id, name, age, ts])(reuse_id=[1]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, > rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 10)]) >+- > Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 30)]) >+- Reused(reference_id=[1]) > {code} > the LookupJoin node uses "lookup=[id=a]" (right) not "lookup=[age=10, id=a]" > (wrong) > > so, in the "multi insert" case, the planner works great > in the "single insert" case, the planner throws an exception -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23009) Bump up Guava in Kinesis Connector
[ https://issues.apache.org/jira/browse/FLINK-23009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365391#comment-17365391 ] Emre Kartoglu commented on FLINK-23009: --- release 1.12: https://github.com/apache/flink/pull/16187 release 1.13: https://github.com/apache/flink/pull/16195 release 1.14: https://github.com/apache/flink/pull/16196 > Bump up Guava in Kinesis Connector > -- > > Key: FLINK-23009 > URL: https://issues.apache.org/jira/browse/FLINK-23009 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kinesis >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Emre Kartoglu >Assignee: Emre Kartoglu >Priority: Major > > *Background* > We maintain a copy of the Flink connector in our AWS GitHub group: > [https://github.com/awslabs/amazon-kinesis-connector-flink] > We've recently upgraded the Guava library in our AWS copy as the version we > were using was quite old and had incompatible interface differences with the > later and more commonly used Guava versions. As part of this ticket we'll be > applying the same changes in the Flink repo > [https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-kinesis] > > *Scope* > * Upgrade Guava library in pom.xml > * Switch to 3-arg version of Guava Futures.addCallback method call, as the > old 2-arg version is no longer supported > *Result* > All existing and new tests should pass > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
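For reference, the interface difference driving this change is that newer Guava versions dropped the 2-arg {{Futures.addCallback}} overload; the executor that runs the callback now has to be passed explicitly. A simplified illustration (not the connector code):

{code:java}
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;

class GuavaCallbackMigration {
    static void register(ListenableFuture<String> future) {
        // Old, removed 2-arg form: Futures.addCallback(future, callback);
        // New 3-arg form: the executor that runs the callback is explicit.
        Futures.addCallback(future, new FutureCallback<String>() {
            @Override
            public void onSuccess(String result) {
                // handle the result
            }

            @Override
            public void onFailure(Throwable t) {
                // handle the failure
            }
        }, MoreExecutors.directExecutor());
    }
}
{code}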
[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable
[ https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365390#comment-17365390 ] Yun Gao commented on FLINK-22625: - Got it, many thanks for the fix and the explanation~ > FileSinkMigrationITCase unstable > > > Key: FLINK-22625 > URL: https://issues.apache.org/jira/browse/FLINK-22625 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Anton Kalashnikov >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=420bd9ec-164e-562e-8947-0dacde3cec91&l=22179 > {code} > May 11 00:43:40 Caused by: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 > has not being executed at the moment. Aborting checkpoint. Failure reason: > Not all required tasks are currently running. > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152) > May 11 00:43:40 at > org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114) > May 11 00:43:40 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > May 11 00:43:40 at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > May 11 00:43:40 at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > May 11 00:43:40 at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > May 11 00:43:40 at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > May 11 00:43:40 at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > May 11 00:43:40 at akka.actor.ActorCell.invoke(ActorCell.scala:561) > May 11 00:43:40 at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > May 11 00:43:40 at akka.dispatch.Mailbox.run(Mailbox.scala:225) > May 11 00:43:40 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > May 11 00:43:40 at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22995) Failed to acquire lease 'ConfigMapLock: .... retrying...
[ https://issues.apache.org/jira/browse/FLINK-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365402#comment-17365402 ] Till Rohrmann commented on FLINK-22995: --- Could it then be that the lease did not time out yet? Have you tried choosing a different {{kubernetes.cluster-id}}? If yes, did the same problem arise? > Failed to acquire lease 'ConfigMapLock: retrying... > > > Key: FLINK-22995 > URL: https://issues.apache.org/jira/browse/FLINK-22995 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.13.1 >Reporter: Bhagi >Priority: Major > Attachments: jobamanger.log > > > Hi Team, > I have deployed a Flink session cluster on standalone K8s with JobManager HA > (K8s HA service). > When I submit jobs, all of them fail due to a JobManager leader election > issue. > Attaching the logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
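For anyone trying the suggestion above: the cluster-id sits next to the other HA options in flink-conf.yaml. An illustrative snippet with placeholder values (not from the reporter's setup):

{code}
# Illustrative flink-conf.yaml snippet; values are placeholders.
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
# Choose a cluster-id that no earlier deployment used, so a stale ConfigMap
# lease from a previous run cannot interfere with leader election.
kubernetes.cluster-id: my-session-cluster-2
high-availability.storageDir: s3://my-bucket/flink/recovery
{code}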
[jira] [Updated] (FLINK-22662) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail
[ https://issues.apache.org/jira/browse/FLINK-22662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-22662: - Affects Version/s: 1.14.0 > YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail > > > Key: FLINK-22662 > URL: https://issues.apache.org/jira/browse/FLINK-22662 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.13.0, 1.14.0 >Reporter: Guowei Ma >Assignee: Xintong Song >Priority: Major > Labels: test-stability > > {code:java} > 2021-05-14T00:24:57.8487649Z May 14 00:24:57 [ERROR] > testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase) > Time elapsed: 34.667 s <<< ERROR! > 2021-05-14T00:24:57.8488567Z May 14 00:24:57 > java.util.concurrent.ExecutionException: > 2021-05-14T00:24:57.8489301Z May 14 00:24:57 > org.apache.flink.runtime.rest.util.RestClientException: > [org.apache.flink.runtime.rest.handler.RestHandlerException: > org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find > Flink job (610ed4b159ece04c8ee2ec40e7d0c143) > 2021-05-14T00:24:57.8493142Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:94) > 2021-05-14T00:24:57.8495823Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:84) > 2021-05-14T00:24:57.8496733Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884) > 2021-05-14T00:24:57.8497640Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866) > 2021-05-14T00:24:57.8498491Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8499222Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.853Z May 14 00:24:57 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:234) > 2021-05-14T00:24:57.8500872Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-05-14T00:24:57.8501702Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-05-14T00:24:57.8502662Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8503472Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.8504269Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1079) > 2021-05-14T00:24:57.8504892Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:263) > 2021-05-14T00:24:57.8505565Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-05-14T00:24:57.8506062Z May 14 00:24:57 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-05-14T00:24:57.8506819Z May 14 00:24:57 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-05-14T00:24:57.8507418Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-05-14T00:24:57.8508373Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 
2021-05-14T00:24:57.8509144Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-05-14T00:24:57.8509972Z May 14 00:24:57 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-05-14T00:24:57.8510675Z May 14 00:24:57 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-05-14T00:24:57.8511376Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) > 2021-05-14T00:24:57.851Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 2021-05-14T00:24:57.8513090Z May 14 00:24:57 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-05-14T00:24:57.8513835Z May 14 00:24:57 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-05-14T00:24:57.8514576Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-05-14T00:24:57.8515344Z May 14 00:24:57 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-05-14T00
[jira] [Updated] (FLINK-22662) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail
[ https://issues.apache.org/jira/browse/FLINK-22662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-22662: - Affects Version/s: (was: 1.13.0) 1.13.1 > YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail > > > Key: FLINK-22662 > URL: https://issues.apache.org/jira/browse/FLINK-22662 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.14.0, 1.13.1 >Reporter: Guowei Ma >Assignee: Xintong Song >Priority: Major > Labels: test-stability > > {code:java} > 2021-05-14T00:24:57.8487649Z May 14 00:24:57 [ERROR] > testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase) > Time elapsed: 34.667 s <<< ERROR! > 2021-05-14T00:24:57.8488567Z May 14 00:24:57 > java.util.concurrent.ExecutionException: > 2021-05-14T00:24:57.8489301Z May 14 00:24:57 > org.apache.flink.runtime.rest.util.RestClientException: > [org.apache.flink.runtime.rest.handler.RestHandlerException: > org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find > Flink job (610ed4b159ece04c8ee2ec40e7d0c143) > 2021-05-14T00:24:57.8493142Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:94) > 2021-05-14T00:24:57.8495823Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:84) > 2021-05-14T00:24:57.8496733Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884) > 2021-05-14T00:24:57.8497640Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866) > 2021-05-14T00:24:57.8498491Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8499222Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.853Z May 14 00:24:57 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:234) > 2021-05-14T00:24:57.8500872Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-05-14T00:24:57.8501702Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-05-14T00:24:57.8502662Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8503472Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.8504269Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1079) > 2021-05-14T00:24:57.8504892Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:263) > 2021-05-14T00:24:57.8505565Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-05-14T00:24:57.8506062Z May 14 00:24:57 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-05-14T00:24:57.8506819Z May 14 00:24:57 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-05-14T00:24:57.8507418Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-05-14T00:24:57.8508373Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 
2021-05-14T00:24:57.8509144Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-05-14T00:24:57.8509972Z May 14 00:24:57 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-05-14T00:24:57.8510675Z May 14 00:24:57 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-05-14T00:24:57.8511376Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) > 2021-05-14T00:24:57.851Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 2021-05-14T00:24:57.8513090Z May 14 00:24:57 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-05-14T00:24:57.8513835Z May 14 00:24:57 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-05-14T00:24:57.8514576Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-05-14T00:24:57.8515344Z May 14 00:24:57 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(B
[jira] [Commented] (FLINK-22662) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail
[ https://issues.apache.org/jira/browse/FLINK-22662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365414#comment-17365414 ] Xintong Song commented on FLINK-22662: -- Unfortunately, I'm not able to confirm our hypothesis locally. I tried to reproduce the problem by adding a sleep time in the shutdown hook of {{ClusterEntrypoint}}. However, it never happened to me that a new AM is brought up before the old one has completely shut down. I'm adding a bit more logging for now, to see if it can help us understand what's going on when this fails again. > YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint fail > > > Key: FLINK-22662 > URL: https://issues.apache.org/jira/browse/FLINK-22662 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.14.0, 1.13.1 >Reporter: Guowei Ma >Assignee: Xintong Song >Priority: Major > Labels: test-stability > > {code:java} > 2021-05-14T00:24:57.8487649Z May 14 00:24:57 [ERROR] > testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase) > Time elapsed: 34.667 s <<< ERROR! > 2021-05-14T00:24:57.8488567Z May 14 00:24:57 > java.util.concurrent.ExecutionException: > 2021-05-14T00:24:57.8489301Z May 14 00:24:57 > org.apache.flink.runtime.rest.util.RestClientException: > [org.apache.flink.runtime.rest.handler.RestHandlerException: > org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find > Flink job (610ed4b159ece04c8ee2ec40e7d0c143) > 2021-05-14T00:24:57.8493142Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:94) > 2021-05-14T00:24:57.8495823Z May 14 00:24:57 at > org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:84) > 2021-05-14T00:24:57.8496733Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884) > 2021-05-14T00:24:57.8497640Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866) > 2021-05-14T00:24:57.8498491Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8499222Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.853Z May 14 00:24:57 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:234) > 2021-05-14T00:24:57.8500872Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-05-14T00:24:57.8501702Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-05-14T00:24:57.8502662Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-05-14T00:24:57.8503472Z May 14 00:24:57 at > java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) > 2021-05-14T00:24:57.8504269Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1079) > 2021-05-14T00:24:57.8504892Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:263) > 2021-05-14T00:24:57.8505565Z May 14 00:24:57 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-05-14T00:24:57.8506062Z May 14 00:24:57 at > 
akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-05-14T00:24:57.8506819Z May 14 00:24:57 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-05-14T00:24:57.8507418Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-05-14T00:24:57.8508373Z May 14 00:24:57 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-05-14T00:24:57.8509144Z May 14 00:24:57 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-05-14T00:24:57.8509972Z May 14 00:24:57 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-05-14T00:24:57.8510675Z May 14 00:24:57 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-05-14T00:24:57.8511376Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) > 2021-05-14T00:24:57.851Z May 14 00:24:57 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 2021-05-14T00:24:57.8513090Z May 14 00:24:57 at > scala.concurrent.Future$$anonfun$andThen$1.apply(
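For context, the delay injection mentioned above boils down to something like the following rough sketch (illustrative only, not the actual test change):

{code:java}
// Rough sketch: artificially delay the JVM shutdown hook so the old AM lingers
// while YARN may already be starting a replacement.
public class SlowShutdownHookExample {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                // Simulate a slow cluster-entrypoint shutdown.
                Thread.sleep(10_000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
    }
}
{code}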
[jira] [Updated] (FLINK-23018) State factories should handle extended state descriptors
[ https://issues.apache.org/jira/browse/FLINK-23018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-23018: - Summary: State factories should handle extended state descriptors (was: TTL state factory should handle extended state descriptors) > State factories should handle extended state descriptors > > > Key: FLINK-23018 > URL: https://issues.apache.org/jira/browse/FLINK-23018 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Reporter: Yun Tang >Assignee: Yun Tang >Priority: Major > Fix For: 1.14.0 > > > Currently, {{TtlStateFactory}} can only handle fixed types of state > descriptors. Since {{ValueStateDescriptor}} is not a final class, a user could > still extend it; however, {{TtlStateFactory}} cannot recognize the extending > class. > {{TtlStateFactory}} should use {{StateDescriptor#Type}} to check what kind > of state it is. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-16090) Translate "Table API" page of "Table API & SQL" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365419#comment-17365419 ] ji yinglong commented on FLINK-16090: - [~jark], could you assign it to me? I'd like to take this issue. > Translate "Table API" page of "Table API & SQL" into Chinese > - > > Key: FLINK-16090 > URL: https://issues.apache.org/jira/browse/FLINK-16090 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Priority: Major > Labels: auto-unassigned > > The page url is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/tableApi.html > The markdown file is located in {{flink/docs/dev/table/tableApi.zh.md}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23018) State factories should handle extended state descriptors
[ https://issues.apache.org/jira/browse/FLINK-23018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-23018: - Description: Currently, {{TtlStateFactory}} and other state factories can only handle fixed types of state descriptors. Since {{ValueStateDescriptor}} is not a final class, a user could still extend it; however, {{TtlStateFactory}} cannot recognize the extending class. {{TtlStateFactory}} should use {{StateDescriptor#Type}} to check what kind of state it is. was: Currently, {{TtlStateFactory}} can only handle fixed type of state descriptors. As {{ValueStateDescriptor}} is not a final class and user could still extend it, however, {{TtlStateFactory}} cannot recognize the extending class. {{TtlStateFactory}} should use {{StateDescriptor#Type}} to check what kind of state is. > State factories should handle extended state descriptors > > > Key: FLINK-23018 > URL: https://issues.apache.org/jira/browse/FLINK-23018 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Reporter: Yun Tang >Assignee: Yun Tang >Priority: Major > Fix For: 1.14.0 > > > Currently, {{TtlStateFactory}} and other state factories can only handle > fixed types of state descriptors. Since {{ValueStateDescriptor}} is not a final > class, a user could still extend it; however, {{TtlStateFactory}} cannot > recognize the extending class. > {{TtlStateFactory}} should use {{StateDescriptor#Type}} to check what kind > of state it is. -- This message was sent by Atlassian Jira (v8.3.4#803005)
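A minimal sketch of the proposed dispatch, using the public {{StateDescriptor#getType}} instead of comparing concrete descriptor classes (hypothetical factory code, not the actual patch):

{code:java}
import org.apache.flink.api.common.state.StateDescriptor;

// Hypothetical sketch: dispatch on the declared descriptor type so that user
// subclasses of e.g. ValueStateDescriptor are still recognized.
class StateFactoryDispatchExample {
    static String describe(StateDescriptor<?, ?> descriptor) {
        switch (descriptor.getType()) {
            case VALUE:
                return "create a TTL-aware ValueState";
            case LIST:
                return "create a TTL-aware ListState";
            case MAP:
                return "create a TTL-aware MapState";
            default:
                return "unsupported descriptor type: " + descriptor.getType();
        }
    }
}
{code}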
[jira] [Commented] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365421#comment-17365421 ] guxiang commented on FLINK-18464: - Hi, I ran into the same problem, and I can reproduce it precisely. I checked all Flink versions up to and including 1.13.1, and all of them have this problem. The reason it appears is that state is used in the Trigger and also under the TimeWindow namespace; the bug then shows up when you try to make a checkpoint. {code:java} org.apache.flink.util.FlinkRuntimeException: Error while adding data to RocksDB at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:109) at com.sankuai.grocery.crm.data.mallorg.trigger.MessageProcessOnTimeStateTrigger.onEventTime(MessageProcessOnTimeStateTrigger.java:116) at com.sankuai.grocery.crm.data.mallorg.trigger.MessageProcessOnTimeStateTrigger.onEventTime(MessageProcessOnTimeStateTrigger.java:23) at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:944) at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:481) at org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.advanceWatermark(InternalTimerServiceImpl.java:302) at org.apache.flink.streaming.api.operators.InternalTimeServiceManagerImpl.advanceWatermark(InternalTimeServiceManagerImpl.java:194) at org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:626) at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitWatermark(OneInputStreamTask.java:197) at org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.findAndOutputNewMinWatermarkAcrossAlignedChannels(StatusWatermarkValve.java:196) at org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.inputWatermark(StatusWatermarkValve.java:105) at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:206) at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:174) at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:396) at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191) at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:617) at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:581) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassCastException: org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to org.apache.flink.runtime.state.VoidNamespace at org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:30) at org.apache.flink.contrib.streaming.state.RocksDBKeySerializationUtils.writeNameSpace(RocksDBKeySerializationUtils.java:78) at org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeNamespace(RocksDBSerializedCompositeKeyBuilder.java:175) at org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.buildCompositeKeyNamespace(RocksDBSerializedCompositeKeyBuilder.java:112) at 
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:163) at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:106) ... 20 more {code} > ClassCastException during namespace serialization for checkpoint (Heap) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Task >Affects Versions: 1.9.3 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .
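For reference, a hedged sketch of the pattern the comment above describes: a Trigger that keeps per-window ValueState. Class and state names are illustrative, not taken from the report (MessageProcessOnTimeStateTrigger itself is not public), and this only sketches the state access, not a guaranteed reproduction of the bug.

{code:java}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Illustrative Trigger with per-window state; the reported stack trace points
// at a ValueState#update inside onEventTime, as sketched here.
public class StatefulEventTimeTrigger extends Trigger<Object, TimeWindow> {

    private final ValueStateDescriptor<Long> seenDesc =
            new ValueStateDescriptor<>("seen", Long.class);

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window,
            TriggerContext ctx) throws Exception {
        ValueState<Long> seen = ctx.getPartitionedState(seenDesc);
        Long current = seen.value();
        seen.update(current == null ? 1L : current + 1L);
        ctx.registerEventTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx)
            throws Exception {
        // State update during onEventTime: the place where the reported
        // ClassCastException surfaces when the namespace serializers clash.
        ctx.getPartitionedState(seenDesc).update(0L);
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
        ctx.getPartitionedState(seenDesc).clear();
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}
{code}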
[jira] [Commented] (FLINK-12941) Translate "Amazon AWS Kinesis Streams Connector" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365423#comment-17365423 ] zhangzhl commented on FLINK-12941: -- Hi [~jark], I'm willing to translate this page. Could you assign it to me? > Translate "Amazon AWS Kinesis Streams Connector" page into Chinese > -- > > Key: FLINK-12941 > URL: https://issues.apache.org/jira/browse/FLINK-12941 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Priority: Minor > Labels: auto-unassigned > > Translate the internal page > "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kinesis.html" > into Chinese. > > The doc is located in "flink/docs/dev/connectors/kinesis.zh.md" -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-23024) RPC result TaskManagerInfoWithSlots not serializable
[ https://issues.apache.org/jira/browse/FLINK-23024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann closed FLINK-23024. - Resolution: Fixed Fixed via master: 1a3c797ea3f20efff9246d30c76fbdd6a45e9030 1.13.2: 9f90d4b31f1e2316b0e01a10cb0d727ab647677e > RPC result TaskManagerInfoWithSlots not serializable > > > Key: FLINK-23024 > URL: https://issues.apache.org/jira/browse/FLINK-23024 > Project: Flink > Issue Type: Bug > Components: Runtime / REST >Affects Versions: 1.14.0, 1.13.1 >Reporter: Arvid Heise >Assignee: Yangze Guo >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0, 1.13.2 > > > A user reported the following stacktrace while accessing web UI. > {noformat} > Unhandled exception. > org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed > to serialize the result for RPC call : requestTaskManagerDetailsInfo. > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:404) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$0(AkkaRpcActor.java:360) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) > ~[?:1.8.0_251] at > java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:848) > ~[?:1.8.0_251] at > java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2168) > ~[?:1.8.0_251] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.sendAsyncResponse(AkkaRpcActor.java:352) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:319) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.Actor$class.aroundReceive(Actor.scala:517) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.actor.ActorCell.invoke(ActorCell.scala:561) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.run(Mailbox.scala:225) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.Mailbox.exec(Mailbox.scala:235) > 
[flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) > [flink-dist_2.11-1.13.1.jar:1.13.1] at > akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) > [flink-dist_2.11-1.13.1.jar:1.13.1] Caused by: > java.io.NotSerializableException: > org.apache.flink.runtime.resourcemanager.TaskManagerInfoWithSlots at > java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184) > ~[?:1.8.0_251] at > java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) > ~[?:1.8.0_251] at > org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:624) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcSerializedValue.valueOf(AkkaRpcSerializedValue.java:66) > ~[flink-dist_2.11-1.13.1.jar:1.13.1] at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:387) > ~[flink-dist_2.11-1.13.1.jar:1.13.1
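The root cause in the stack trace is plain Java serialization of the RPC response. A hedged sketch of the likely shape of the fix (the actual fields of the class are omitted here; this is not the real Flink class body):

{code:java}
import java.io.Serializable;

// RPC results cross Akka's serialization boundary, so the response type must
// be java.io.Serializable, and so must everything it contains.
public class TaskManagerInfoWithSlots implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... payload fields, each of which must itself be serializable ...
}
{code}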
[jira] [Created] (FLINK-23033) Support object array in Python DataStream API
Dian Fu created FLINK-23033: --- Summary: Support object array in Python DataStream API Key: FLINK-23033 URL: https://issues.apache.org/jira/browse/FLINK-23033 Project: Flink Issue Type: Improvement Components: API / Python Reporter: Dian Fu Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11733) Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper
[ https://issues.apache.org/jira/browse/FLINK-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11733: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper > > > Key: FLINK-11733 > URL: https://issues.apache.org/jira/browse/FLINK-11733 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hadoop Compatibility >Reporter: vinoyang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, Flink only supports > {{org.apache.flink.hadoopcompatibility.mapred.Mapper}} in module > flink-hadoop-compatibility. I think we also need to support the new Hadoop > Mapper API: {{org.apache.hadoop.mapreduce.Mapper}}. We can implement a new > {{HadoopMapFunction}} to wrap {{org.apache.hadoop.mapreduce.Mapper}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
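For comparison, a sketch of how the existing mapred-API wrapper is used today; {{Tokenizer}} is a hypothetical user class implementing {{org.apache.hadoop.mapred.Mapper}}, and the ticket asks for an analogous wrapper that accepts {{org.apache.hadoop.mapreduce.Mapper}}:

{code:java}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapred.HadoopMapFunction;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class HadoopMapExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<LongWritable, Text>> input =
                env.fromElements(Tuple2.of(new LongWritable(0L), new Text("hello world")));

        // Wraps an old-API (mapred) Mapper as a Flink FlatMapFunction;
        // Tokenizer is assumed to emit (word, count) pairs.
        DataSet<Tuple2<Text, LongWritable>> result = input.flatMap(
                new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(new Tokenizer()));

        result.print();
    }
}
{code}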
[jira] [Updated] (FLINK-11698) Add readMapRedTextFile API for HadoopInputs
[ https://issues.apache.org/jira/browse/FLINK-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11698: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Add readMapRedTextFile API for HadoopInputs > --- > > Key: FLINK-11698 > URL: https://issues.apache.org/jira/browse/FLINK-11698 > Project: Flink > Issue Type: New Feature > Components: Connectors / Hadoop Compatibility >Reporter: vinoyang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Considering that {{TextInputFormat}} is a very common {{InputFormat}}, I > think it's valuable to provide such a convenient API for users, just like > {{readMapRedSequenceFile}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
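Roughly what the proposed convenience method would abbreviate; today the same result goes through the generic {{readHadoopFile}} helper (the input path below is a placeholder):

{code:java}
// Reading text lines via the mapred TextInputFormat with today's API; a
// readMapRedTextFile(path) helper would wrap exactly this call.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Tuple2<LongWritable, Text>> lines = env.createInput(
        HadoopInputs.readHadoopFile(
                new TextInputFormat(), LongWritable.class, Text.class, "hdfs:///input"));
{code}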
[jira] [Updated] (FLINK-11672) Add example for streaming operators's connect
[ https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11672: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Add example for streaming operators's connect > --- > > Key: FLINK-11672 > URL: https://issues.apache.org/jira/browse/FLINK-11672 > Project: Flink > Issue Type: Improvement > Components: Examples >Reporter: shengjk1 >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Add an example for streaming operators' connect, such as > {{datastream1.connect(datastream2)}}, in code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
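A minimal sketch of the kind of example the ticket asks for: connecting two streams of different types and processing them with one CoMapFunction (stream contents are illustrative, and {{env}} is assumed to be a StreamExecutionEnvironment):

{code:java}
DataStream<String> control = env.fromElements("DROP", "IGNORE");
DataStream<Integer> data = env.fromElements(1, 2, 3);

// connect() pairs two streams for shared processing downstream; map1/map2
// handle elements of the first and second input respectively.
DataStream<String> out = control
        .connect(data)
        .map(new CoMapFunction<String, Integer, String>() {
            @Override
            public String map1(String value) {
                return "control:" + value;
            }

            @Override
            public String map2(Integer value) {
                return "data:" + value;
            }
        });
{code}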
[jira] [Updated] (FLINK-11673) add example for streaming operators's broadcast
[ https://issues.apache.org/jira/browse/FLINK-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11673: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > add example for streaming operators's broadcast > --- > > Key: FLINK-11673 > URL: https://issues.apache.org/jira/browse/FLINK-11673 > Project: Flink > Issue Type: Improvement > Components: Examples >Reporter: shengjk1 >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Add an example for streaming operators' broadcast in code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
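A minimal sketch of what such an example could show, covering both broadcast variants (data and the descriptor name are illustrative; {{env}} is assumed to be a StreamExecutionEnvironment):

{code:java}
DataStream<String> rules = env.fromElements("rule-a", "rule-b");

// 1) Plain broadcast partitioning: every record is replicated to all
//    parallel instances of the downstream operator.
DataStream<String> replicated = rules.broadcast();

// 2) Broadcast state pattern: the descriptor defines the state that the
//    non-broadcast side can read in a BroadcastProcessFunction.
MapStateDescriptor<String, String> ruleStateDesc =
        new MapStateDescriptor<>("rules", String.class, String.class);
BroadcastStream<String> broadcastRules = rules.broadcast(ruleStateDesc);
{code}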
[jira] [Updated] (FLINK-11695) Make sharedStateDir could create sub-directories to avoid MaxDirectoryItemsExceededException
[ https://issues.apache.org/jira/browse/FLINK-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11695: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Make sharedStateDir could create sub-directories to avoid > MaxDirectoryItemsExceededException > > > Key: FLINK-11695 > URL: https://issues.apache.org/jira/browse/FLINK-11695 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Yun Tang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > > We have met this annoying problem many times: the {{sharedStateDir}} in the > checkpoint path exceeds the directory item limit due to large checkpoints: > {code:java} > org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException: > The directory item limit of xxx is exceeded: limit=1048576 items=1048576 > {code} > Currently, our solution is to let {{FsCheckpointStorage}} create > sub-dirs when calling {{resolveCheckpointStorageLocation}}. The default value > for the number of sub-dirs is zero, to keep backward compatibility with the > current behavior. The created sub-dirs are named with the integer values in > {{[0, num-of-sub-dirs)}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
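A hedged sketch of the proposed layout; the distribution function below is an assumption for illustration (the ticket only fixes the sub-directory names to {{0 .. num-of-sub-dirs - 1}}), and {{fileName}}, {{numberOfSubDirs}} and {{sharedStateDir}} are placeholders:

{code:java}
// Spread shared-state files over numberOfSubDirs sub-directories so no single
// directory hits HDFS's dfs.namenode.fs-limits.max-directory-items limit.
int subDirIndex = Math.floorMod(fileName.hashCode(), numberOfSubDirs);
org.apache.flink.core.fs.Path target =
        new org.apache.flink.core.fs.Path(sharedStateDir, String.valueOf(subDirIndex));
{code}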
[jira] [Updated] (FLINK-11616) Flink official document has an error
[ https://issues.apache.org/jira/browse/FLINK-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11616: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Flink official document has an error > > > Key: FLINK-11616 > URL: https://issues.apache.org/jira/browse/FLINK-11616 > Project: Flink > Issue Type: Bug > Components: Documentation >Reporter: xulinjie >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > Attachments: wx20190214-214...@2x.png > > > The page URL is > [https://ci.apache.org/projects/flink/flink-docs-master/tutorials/flink_on_windows.html] > The mistake is in the paragraph “Installing Flink from Git”: > “The solution is to adjust the Cygwin settings to deal with the correct line > endings by following these three steps:”. > The sequence of steps you wrote was "1, 2, 1". But I think you might want to > write "1, 2, 3". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11734) Provide HadoopReduceFunction for org.apache.hadoop.mapreduce.Reducer
[ https://issues.apache.org/jira/browse/FLINK-11734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11734: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Provide HadoopReduceFunction for org.apache.hadoop.mapreduce.Reducer > > > Key: FLINK-11734 > URL: https://issues.apache.org/jira/browse/FLINK-11734 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hadoop Compatibility >Reporter: vinoyang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > > Currently, Flink only supports > {{org.apache.flink.hadoopcompatibility.mapred.Reducer}} in module > flink-hadoop-compatibility. I think we also need to support the new Hadoop > Reducer API: {{org.apache.hadoop.mapreduce.Reducer}}. We can implement a new > {{HadoopReduceFunction}} to wrap {{org.apache.hadoop.mapreduce.Reducer}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11697) Add readMapReduceTextFile API for HadoopInputs
[ https://issues.apache.org/jira/browse/FLINK-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11697: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Add readMapReduceTextFile API for HadoopInputs > -- > > Key: FLINK-11697 > URL: https://issues.apache.org/jira/browse/FLINK-11697 > Project: Flink > Issue Type: New Feature > Components: Connectors / Hadoop Compatibility >Reporter: vinoyang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Considering that {{TextInputFormat}} is a very common {{InputFormat}}, I > think it's valuable to provide such a convenient API for users, just like > {{readMapReduceSequenceFile}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11470) LocalEnvironment doesn't call FileSystem.initialize()
[ https://issues.apache.org/jira/browse/FLINK-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11470: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > LocalEnvironment doesn't call FileSystem.initialize() > - > > Key: FLINK-11470 > URL: https://issues.apache.org/jira/browse/FLINK-11470 > Project: Flink > Issue Type: Bug > Components: Runtime / Task >Affects Versions: 1.6.2 >Reporter: Nico Kruber >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > > Proper Flink cluster components, e.g. task manager or job manager, initialize > configured file systems with their parsed {{Configuration}} objects. However, > the {{LocalEnvironment}} does not seem to do that and we therefore lack the > ability to configure access credentials etc. like in the following example: > {code} > Configuration config = new Configuration(); > config.setString("s3.access-key", "user"); > config.setString("s3.secret-key", "secret"); > //FileSystem.initialize(config); > final ExecutionEnvironment exEnv = > ExecutionEnvironment.createLocalEnvironment(config); > {code} > The workaround is to call {{FileSystem.initialize(config);}} yourself but it > is actually surprising that this is not done automatically. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11639) Provide readSequenceFile for Hadoop new API
[ https://issues.apache.org/jira/browse/FLINK-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11639: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Provide readSequenceFile for Hadoop new API > --- > > Key: FLINK-11639 > URL: https://issues.apache.org/jira/browse/FLINK-11639 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hadoop Compatibility >Reporter: vinoyang >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Now {{HadoopInputs}} just provides a {{readSequenceFile}} for > {{org.apache.hadoop.mapred.SequenceFileInputFormat}}; it would be better to > provide another {{readSequenceFile}} for > {{org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
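For reference, today's helper, which is backed by the mapred-API input format; the ticket proposes an overload backed by the mapreduce API. The path is a placeholder and {{env}} is assumed to be an ExecutionEnvironment:

{code:java}
// Reads (key, value) pairs from a SequenceFile via the old mapred API.
DataSet<Tuple2<Text, LongWritable>> pairs = env.createInput(
        HadoopInputs.readSequenceFile(Text.class, LongWritable.class, "hdfs:///seq"));
{code}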
[jira] [Updated] (FLINK-11619) Make ScheduleMode configurable via user code or configuration file
[ https://issues.apache.org/jira/browse/FLINK-11619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11619: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Make ScheduleMode configurable via user code or configuration file > --- > > Key: FLINK-11619 > URL: https://issues.apache.org/jira/browse/FLINK-11619 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: yuqi >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > > Currently, the schedule mode for streaming jobs is always EAGER; > see StreamingJobGraphGenerator#createJobGraph: > {code:java} > // make sure that all vertices start immediately > jobGraph.setScheduleMode(ScheduleMode.EAGER); > {code} > Given this, we can make ScheduleMode configurable for users so as to adapt to > different environments. Users could set this option via env.setScheduleMode() in > code, or make it optional in the configuration. > Any help and suggestions are welcome. -- This message was sent by Atlassian Jira (v8.3.4#803005)
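A sketch of the hypothetical user-facing call from the proposal; {{setScheduleMode}} does not exist in Flink today, the name comes from the ticket text:

{code:java}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Assumption: proposed API, mirroring jobGraph.setScheduleMode(...) above.
env.setScheduleMode(ScheduleMode.LAZY_FROM_SOURCES);
{code}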
[jira] [Updated] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11569: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Seems like the "toString" method for Row type is only concatenating all > fields using COMMA ",". However it does not wrap the entire Row in some type > of encapsulation, for example "()". This results in nested Row being > serialized as if they are all in one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > Problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > New changes should be simple to have a dedicated wrapper for the "row" > stringify format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11620) table example add kafka
[ https://issues.apache.org/jira/browse/FLINK-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11620: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > table example add kafka > --- > > Key: FLINK-11620 > URL: https://issues.apache.org/jira/browse/FLINK-11620 > Project: Flink > Issue Type: Improvement > Components: Examples >Reporter: lining >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11454) Support MergedStream operation
[ https://issues.apache.org/jira/browse/FLINK-11454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11454: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Support MergedStream operation > -- > > Key: FLINK-11454 > URL: https://issues.apache.org/jira/browse/FLINK-11454 > Project: Flink > Issue Type: New Feature > Components: API / DataStream >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > > Following SlicedStream, the mergedStream operator merges results from > sliced stream and produces windowing results. > {code:java} > val slicedStream: SlicedStream = inputStream > .keyBy("key") > .sliceWindow(Time.seconds(5L)) // new “slice window” concept: to > combine >// tumble results based on discrete >// non-overlapping windows. > .aggregate(aggFunc) > val mergedStream1: MergedStream = slicedStream > .slideOver(Time.second(10L)) // combine slice results with same > >// windowing function, equivalent to >// WindowOperator with an aggregate > state >// and derived aggregate function. > val mergedStream2: MergedStream = slicedStream > .slideOver(Count.of(5)) > .apply(windowFunction) // apply a different window function > over >// the sliced results.{code} > MergedStream is produced by MergeOperator: > {{slideOver}} and {{apply}} can be combined into a {{OVER AGGREGATE}} > implementation similar to the one in TableAPI. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-11543) Type mismatch AssertionError in FilterJoinRule
[ https://issues.apache.org/jira/browse/FLINK-11543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-11543: --- Labels: auto-deprioritized-major auto-unassigned (was: auto-unassigned stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Type mismatch AssertionError in FilterJoinRule > --- > > Key: FLINK-11543 > URL: https://issues.apache.org/jira/browse/FLINK-11543 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.7.1 >Reporter: Timo Walther >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned > Attachments: Test.java > > > The following problem is copied from the user mailing list: > {code} > Exception in thread "main" java.lang.AssertionError: mismatched type $5 > TIMESTAMP(3) > at > org.apache.calcite.rex.RexUtil$FixNullabilityShuttle.visitInputRef(RexUtil.java:2481) > at > org.apache.calcite.rex.RexUtil$FixNullabilityShuttle.visitInputRef(RexUtil.java:2459) > at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112) > at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:151) > at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:100) > at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:34) > at org.apache.calcite.rex.RexCall.accept(RexCall.java:107) > at org.apache.calcite.rex.RexShuttle.apply(RexShuttle.java:279) > at org.apache.calcite.rex.RexShuttle.mutate(RexShuttle.java:241) > at org.apache.calcite.rex.RexShuttle.apply(RexShuttle.java:259) > at org.apache.calcite.rex.RexUtil.fixUp(RexUtil.java:1605) > at > org.apache.calcite.rel.rules.FilterJoinRule.perform(FilterJoinRule.java:230) > at > org.apache.calcite.rel.rules.FilterJoinRule$FilterIntoJoinRule.onMatch(FilterJoinRule.java:344) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:646) > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339) > at > org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:374) > at > org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292) > at > org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812) > at > org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860) > at > org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:340) > at > org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:272) > at test.Test.main(Test.java:78) > {code} > It sounds related to FLINK-10211. A runnable example is attached. > See also: > https://lists.apache.org/thread.html/9a9a979f4344111baf053a51ebfa2f2a0ba31e4d5a70e633dbcae254@%3Cuser.flink.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22955) lookup join filter push down result to mismatch function signature
[ https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365457#comment-17365457 ] JING ZHANG commented on FLINK-22955: Hi [~gsavl]. You could disable `CoreRules.FILTER_INTO_JOIN` to stop pushing filters into joins, but it seems a bit hacky... Would you please consider the previous solution again? Perhaps I didn't explain it clearly. You only need to implement one `eval` method with a variable argument length instead of implementing many `eval` methods (please check the first solution in the demo code above). AFAIK, `JdbcDynamicTableSource` implements `JdbcRowDataLookupFunction` in this way. Would you please double-check if it could help? > lookup join filter push down result to mismatch function signature > -- > > Key: FLINK-22955 > URL: https://issues.apache.org/jira/browse/FLINK-22955 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.3, 1.13.1, 1.12.4 > Environment: Flink 1.13.1 > how to reproduce: patch file attached >Reporter: Cooper Luan >Priority: Critical > Fix For: 1.11.4, 1.12.5, 1.13.2 > > Attachments: > 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch > > > A SQL statement like this may result in a lookup function signature mismatch > exception when explaining the SQL: > {code:sql} > CREATE TEMPORARY VIEW v_vvv AS > SELECT * FROM MyTable AS T > JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D > ON T.a = D.id; > SELECT a,b,id,name > FROM v_vvv > WHERE age = 10;{code} > the lookup function is > {code:scala} > class AsyncTableFunction1 extends AsyncTableFunction[RowData] { > def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: > Integer): Unit = { > } > }{code} > exec plan is > {code:java} > LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = > 10)], select=[a, b, id, name]) >+- Calc(select=[a, b]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, rowtime]) > {code} > the "lookup=[age=10, id=a]" results in a signature mismatch > > But if I add one more insert, it works well: > {code:sql} > SELECT a,b,id,name > FROM v_vvv > WHERE age = 30 > {code} > exec plan is > {code:java} > == Optimized Execution Plan == > LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], > joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, > rowtime, id, name, age, ts])(reuse_id=[1]) > +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], > fields=[a, b, c, proctime, > rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 10)]) >+- > Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`], > fields=[a, b, id, name]) > +- Calc(select=[a, b, id, name], where=[(age = 30)]) >+- Reused(reference_id=[1]) > {code} > the LookupJoin node uses "lookup=[id=a]" (right), not "lookup=[age=10, id=a]" > (wrong) > > So in the "multi insert" case the planner works great; > in the "single insert" case the planner throws an exception -- This message was sent by Atlassian Jira (v8.3.4#803005)
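A minimal sketch of the variable-arity `eval` suggested above (modeled on the comment's description of `JdbcRowDataLookupFunction`; the generics and key handling are assumptions, and the body is a stub):

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.functions.AsyncTableFunction;

public class AsyncLookupFunction1 extends AsyncTableFunction<RowData> {

    // One varargs eval covers both plans: keys arrives as [a] for
    // lookup=[id=a] and as [10, a] for lookup=[age=10, id=a].
    public void eval(CompletableFuture<Collection<RowData>> future, Object... keys) {
        // ... look the row up by the given keys; stubbed out in this sketch.
        future.complete(Collections.emptyList());
    }
}
{code}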
[jira] [Commented] (FLINK-21308) Cancel "sendAfter"
[ https://issues.apache.org/jira/browse/FLINK-21308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365474#comment-17365474 ] Igal Shilman commented on FLINK-21308: -- Hi [~stephanpelikan], I think that, in the scope of a specific address, message ids should be unique, and providing the same message id should be considered an error (by StateFun) > Cancel "sendAfter" > -- > > Key: FLINK-21308 > URL: https://issues.apache.org/jira/browse/FLINK-21308 > Project: Flink > Issue Type: New Feature > Components: Stateful Functions >Reporter: Stephan Pelikan >Assignee: Igal Shilman >Priority: Major > Labels: auto-deprioritized-major, stale-major > Fix For: statefun-3.1.0 > > > As a user I want to cancel delayed sent messages not needed any more to keep > state clean. > Use case: > {quote}My use-case is processing business events of customers. Those events > are triggered by ourself or by the customer depending of what's the current > state of the ongoing customer's business use-case. We need to monitor > delayed/missing business events which belong to previous events. For example: > the customer has to confirm something we did. Depending on what it is the > confirmation has to be within hours, days or even months. If there is a delay > we need to know. But if the customer confirms in time we want to cleanup to > keep the state small. > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-21308) Cancel "sendAfter"
[ https://issues.apache.org/jira/browse/FLINK-21308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365474#comment-17365474 ] Igal Shilman edited comment on FLINK-21308 at 6/18/21, 12:10 PM: - Hi [~stephanpelikan], I think that, in the scope of a specific address, message ids should be unique, and providing the same message id should be considered an error. (by StateFun) was (Author: igal): Hi [~stephanpelikan], I think that, in the scope of a specific address, message ids should be unique, and providing the same message id should be considered an error (by StateFun) > Cancel "sendAfter" > -- > > Key: FLINK-21308 > URL: https://issues.apache.org/jira/browse/FLINK-21308 > Project: Flink > Issue Type: New Feature > Components: Stateful Functions >Reporter: Stephan Pelikan >Assignee: Igal Shilman >Priority: Major > Labels: auto-deprioritized-major, stale-major > Fix For: statefun-3.1.0 > > > As a user I want to cancel delayed sent messages not needed any more to keep > state clean. > Use case: > {quote}My use-case is processing business events of customers. Those events > are triggered by ourself or by the customer depending of what's the current > state of the ongoing customer's business use-case. We need to monitor > delayed/missing business events which belong to previous events. For example: > the customer has to confirm something we did. Depending on what it is the > confirmation has to be within hours, days or even months. If there is a delay > we need to know. But if the customer confirms in time we want to cleanup to > keep the state small. > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23034) NPE in JobDetailsDeserializer during the reading old version of ExecutionState
Anton Kalashnikov created FLINK-23034: - Summary: NPE in JobDetailsDeserializer during the reading old version of ExecutionState Key: FLINK-23034 URL: https://issues.apache.org/jira/browse/FLINK-23034 Project: Flink Issue Type: Bug Reporter: Anton Kalashnikov Assignee: Anton Kalashnikov There is no compatibility for ExecutionState: {noformat} java.lang.NullPointerExceptionjava.lang.NullPointerException at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:308) at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:278) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3479) at org.apache.flink.runtime.messages.webmonitor.JobDetailsTest.testJobDetailsCompatibleUnmarshalling(JobDetailsTest.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
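A hedged sketch of a backward-compatible read in the deserializer (the names {{node}} and {{state}} are assumptions for illustration; the NPE suggests a per-state count missing from old JSON payloads is dereferenced without a null check):

{code:java}
// Treat ExecutionState values absent from old payloads as zero tasks
// instead of calling asInt() on a null JsonNode.
JsonNode countNode = node.get(state.name().toLowerCase());
int numTasks = countNode == null ? 0 : countNode.asInt();
{code}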
[jira] [Updated] (FLINK-23034) NPE in JobDetailsDeserializer during the reading old version of ExecutionState
[ https://issues.apache.org/jira/browse/FLINK-23034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Kalashnikov updated FLINK-23034: -- Description: There is no compatibility for ExecutionState: {noformat} java.lang.NullPointerException at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:308) at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:278) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3479) at org.apache.flink.runtime.messages.webmonitor.JobDetailsTest.testJobDetailsCompatibleUnmarshalling(JobDetailsTest.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) {noformat} was: There is no compatibility for ExecutionState: {noformat} java.lang.NullPointerExceptionjava.lang.NullPointerException at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:308) at org.apache.flink.runtime.messages.webmonitor.JobDetails$JobDetailsDeserializer.deserialize(JobDetails.java:278) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593) at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3479) at org.apache.flink.runtime.messages.webmonitor.JobDetailsTest.testJobDetailsCompatibleUnmarshalling(JobDetailsTest.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) at org.junit.ru
[jira] [Assigned] (FLINK-22678) Hide ChangelogStateBackend From Users
[ https://issues.apache.org/jira/browse/FLINK-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li reassigned FLINK-22678: - Assignee: Zakelly Lan (was: Yuan Mei) > Hide ChangelogStateBackend From Users > -- > > Key: FLINK-22678 > URL: https://issues.apache.org/jira/browse/FLINK-22678 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Yuan Mei >Assignee: Zakelly Lan >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > Per discussion on the mailing thread: > https://lists.apache.org/thread.html/ra178ce29088b1da362d98a5a6d8c7be48051caf1637ee24261738217%40%3Cdev.flink.apache.org%3E > We decide to make a refined version of loading ChangelogStateBackend: > - Define consistent override and combination policy (flag + state backend) > in different config levels > - Define explicitly the meaning of "enable flag" = true/false/unset > - Hide ChangelogStateBackend from users > Details described in > https://docs.google.com/document/d/13AaCf5fczYTDHZ4G1mgYL685FqbnoEhgo0cdwuJlZmw/edit?usp=sharing -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23028) Improve documentation for pages of SQL.
[ https://issues.apache.org/jira/browse/FLINK-23028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-23028: - Component/s: Table SQL / API > Improve documentation for pages of SQL. > --- > > Key: FLINK-23028 > URL: https://issues.apache.org/jira/browse/FLINK-23028 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Reporter: Roc Marshal >Priority: Minor > Labels: pull-request-available > > Wrong style in > [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/table/sql/create/#create-function] > section. > Wrong style in > [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/table/sql/drop/#drop-function] > section. > Wrong style in > [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/table/sql/alter/#alter-function] > section. > Add the description about drop catalog in the page of > [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/drop/] > . > 'catloag' of > [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/table/sql/use/#use-catloag] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (FLINK-21831) Add Changelog state for timers (PQ)
[ https://issues.apache.org/jira/browse/FLINK-21831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li reassigned FLINK-21831: - Assignee: Piotr Nowojski > Add Changelog state for timers (PQ) > > > Key: FLINK-21831 > URL: https://issues.apache.org/jira/browse/FLINK-21831 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Roman Khachatryan >Assignee: Piotr Nowojski >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23035) Add explicit method to StateChangelogWriter to write metadata
Roman Khachatryan created FLINK-23035: - Summary: Add explicit method to StateChangelogWriter to write metadata Key: FLINK-23035 URL: https://issues.apache.org/jira/browse/FLINK-23035 Project: Flink Issue Type: Sub-task Components: Runtime / State Backends Reporter: Roman Khachatryan Fix For: 1.14.0 Currently, metadata is written to the state changelog using the same StateChangelogWriter.append() method as data. However, it doesn't belong to a specific key group, and should be read first on recovery. Because of that, key group -1 is used. An explicit append() without a key group would be less fragile (probably still using -1 under the hood). -- This message was sent by Atlassian Jira (v8.3.4#803005)
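The ticket's idea expressed as code, as a hedged sketch (the method name and the default delegation are assumptions, and the real interface has more members than shown here):

{code:java}
public interface StateChangelogWriter {

    // Existing entry point: data is appended under a concrete key group.
    void append(int keyGroup, byte[] value);

    // Proposed: metadata append without a key group, keeping -1 as an
    // internal detail, as the ticket suggests.
    default void appendMeta(byte[] value) {
        append(-1, value);
    }
}
{code}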
[jira] [Closed] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther closed FLINK-11569. Resolution: Duplicate Fixed as part of FLINK-18090. > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Seems like the "toString" method for Row type is only concatenating all > fields using COMMA ",". However it does not wrap the entire Row in some type > of encapsulation, for example "()". This results in nested Row being > serialized as if they are all in one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > Problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > New changes should be simple to have a dedicated wrapper for the "row" > stringify format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-11569: - Comment: was deleted (was: This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. ) > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Seems like the "toString" method for Row type is only concatenating all > fields using COMMA ",". However it does not wrap the entire Row in some type > of encapsulation, for example "()". This results in nested Row being > serialized as if they are all in one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > Problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > New changes should be simple to have a dedicated wrapper for the "row" > stringify format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23037) Document limitations of UC
Dawid Wysakowicz created FLINK-23037: Summary: Document limitations of UC Key: FLINK-23037 URL: https://issues.apache.org/jira/browse/FLINK-23037 Project: Flink Issue Type: Improvement Components: Documentation, Runtime / Checkpointing Reporter: Dawid Wysakowicz Assignee: Dawid Wysakowicz Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23036) org.apache.flink.util.FlinkRuntimeException: Error while adding data to RocksDB
guxiang created FLINK-23036: --- Summary: org.apache.flink.util.FlinkRuntimeException: Error while adding data to RocksDB Key: FLINK-23036 URL: https://issues.apache.org/jira/browse/FLINK-23036 Project: Flink Issue Type: Bug Components: API / Core Affects Versions: 1.13.1, 1.12.2, 1.9.4 Reporter: guxiang I had a similar problem to this one {quote}[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] https://issues.apache.org/jira/browse/FLINK-18464 {quote} I tested three versions of Flink (1.9.0 / 1.12.2 / 1.13.1), and all of them exhibited this problem. This is because I defined a state (VoidNamespace) on a trigger and used that state in a TimeWindow (window namespace). When the job runs locally and does not use a backend state, the program runs normally. However, when a backend state is used, the following error is reported: {code:java} // code placeholder org.apache.flink.util.FlinkRuntimeException: Error while adding data to RocksDB at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:109) at com.sankuai.grocery.crm.data.mallorg.trigger.MessageProcessOnTimeStateTrigger.onEventTime(MessageProcessOnTimeStateTrigger.java:116) at com.sankuai.grocery.crm.data.mallorg.trigger.MessageProcessOnTimeStateTrigger.onEventTime(MessageProcessOnTimeStateTrigger.java:23) at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:944) at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:481) at org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.advanceWatermark(InternalTimerServiceImpl.java:302) at org.apache.flink.streaming.api.operators.InternalTimeServiceManagerImpl.advanceWatermark(InternalTimeServiceManagerImpl.java:194) at org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:626) at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitWatermark(OneInputStreamTask.java:197) at org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.findAndOutputNewMinWatermarkAcrossAlignedChannels(StatusWatermarkValve.java:196) at org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.inputWatermark(StatusWatermarkValve.java:105) at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:206) at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:174) at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:396) at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191) at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:617) at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:581) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassCastException: org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to org.apache.flink.runtime.state.VoidNamespace at 
org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:30) at org.apache.flink.contrib.streaming.state.RocksDBKeySerializationUtils.writeNameSpace(RocksDBKeySerializationUtils.java:78) at org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeNamespace(RocksDBSerializedCompositeKeyBuilder.java:175) at org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.buildCompositeKeyNamespace(RocksDBSerializedCompositeKeyBuilder.java:112) at org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:163) at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:106) ... 20 more {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-11569: - Comment: was deleted (was: This issue was marked "stale-assigned" and has not received an update in 7 days. It is now automatically unassigned. If you are still working on it, you can assign it to yourself again. Please also give an update about the status of the work.) > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > It seems the "toString" method for the Row type only concatenates all > fields using a comma ",". However, it does not wrap the entire Row in any kind > of encapsulation, for example "()". This results in nested Rows being > serialized as if they were all on one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > The problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > The change should be simple: add a dedicated wrapper for the "row" > stringified format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
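To make the flattening concrete, here is a tiny runnable demo of the behavior described in the issue; it assumes only flink-core on the classpath:
{code:java}
import org.apache.flink.types.Row;

public class RowToStringDemo {
    public static void main(String[] args) {
        // The nested row from the example above.
        Row nested = Row.of("a", 1, Row.of("b", 2));
        // On affected versions the fields print flattened (e.g. a,1,b,2),
        // so the nesting is lost; the proposal is to print ("a",1,("b",2)).
        System.out.println(nested);
    }
}
{code}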
[jira] [Updated] (FLINK-23037) Document limitations of UC
[ https://issues.apache.org/jira/browse/FLINK-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz updated FLINK-23037: - Description: The goal of the task is to document the limitations of unaligned checkpoints (UC) and make them more discoverable. We should include documentation for: * connections that UCs are disabled for * interplay with watermarks * concurrent checkpoints > Document limitations of UC > -- > > Key: FLINK-23037 > URL: https://issues.apache.org/jira/browse/FLINK-23037 > Project: Flink > Issue Type: Improvement > Components: Documentation, Runtime / Checkpointing >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Major > Fix For: 1.14.0 > > > The goal of the task is to document the limitations of unaligned checkpoints (UC) and make them more > discoverable. We should include documentation for: > * connections that UCs are disabled for > * interplay with watermarks > * concurrent checkpoints -- This message was sent by Atlassian Jira (v8.3.4#803005)
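For orientation, these are the user-facing switches the requested documentation would revolve around; a minimal sketch of a standard DataStream setup, not an excerpt from the ticket:
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000); // checkpoint every 10 seconds
        // Switch from aligned to unaligned checkpoints (UC).
        env.getCheckpointConfig().enableUnalignedCheckpoints();
        // One of the limitations to document: UC does not support
        // concurrent checkpoints.
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
    }
}
{code}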
[jira] [Commented] (FLINK-19830) Properly implements processing-time temporal table join
[ https://issues.apache.org/jira/browse/FLINK-19830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365485#comment-17365485 ] Timo Walther commented on FLINK-19830: -- I was also approached by a customer asking why we don't support this anymore. Do we at least support the old temporal table function as a workaround? > Properly implements processing-time temporal table join > --- > > Key: FLINK-19830 > URL: https://issues.apache.org/jira/browse/FLINK-19830 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Runtime >Reporter: Leonard Xu >Priority: Major > Fix For: 1.14.0 > > > The existing TemporalProcessTimeJoinOperator already supports the temporal > table join. > However, the semantics of this implementation are problematic, because the join > processing for the left stream doesn't wait for a complete snapshot of the temporal > table; this may mislead users in production environments. > Under processing-time temporal join semantics, obtaining a complete > snapshot of the temporal table may require introducing a new mechanism in Flink SQL in > the future. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-11569: - Comment: was deleted (was: I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned and neither it nor its Sub-Tasks have been updated for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is a Major, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. ) > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > It seems the "toString" method for the Row type only concatenates all > fields using a comma ",". However, it does not wrap the entire Row in any kind > of encapsulation, for example "()". This results in nested Rows being > serialized as if they were all on one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > The problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > The change should be simple: add a dedicated wrapper for the "row" > stringified format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (FLINK-11569) Row type does not serialize in to readable format when invoke "toString" method
[ https://issues.apache.org/jira/browse/FLINK-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-11569: - Comment: was deleted (was: This issue is assigned but has not received an update in 7 days so it has been labeled "stale-assigned". If you are still working on the issue, please give an update and remove the label. If you are no longer working on the issue, please unassign it so someone else may work on it. In 7 days the issue will be automatically unassigned.) > Row type does not serialize in to readable format when invoke "toString" > method > --- > > Key: FLINK-11569 > URL: https://issues.apache.org/jira/browse/FLINK-11569 > Project: Flink > Issue Type: Bug > Components: API / Type Serialization System >Reporter: Rong Rong >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > It seems the "toString" method for the Row type only concatenates all > fields using a comma ",". However, it does not wrap the entire Row in any kind > of encapsulation, for example "()". This results in nested Rows being > serialized as if they were all on one level. > For example: > {code:java} > Row.of("a", 1, Row.of("b", 2)) > {code} > is printed out as > {code:java} > "a",1,"b",2 > {code} > The problematic piece of code can be found here: > [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java#L87] > The change should be simple: add a dedicated wrapper for the "row" > stringified format, something like: > {code:java} > ("a",1,("b",2)) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan reopened FLINK-18464: --- Thanks for reporting, reopening the ticket. > ClassCastException during namespace serialization for checkpoint (Heap) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Task >Affects Versions: 1.9.3 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .evictor(new MyEvictor) > .process(new MyFunction).setParallelism(aggregateParallelism) > .addSink(kafkaSink).setParallelism(sinkParallelism) > .name("kafka-record-sink"){code} > > > {code:java} > java.lang.Exception: Could not materialize checkpoint 1 for operator > Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, > ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5). > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.util.concurrent.ExecutionException: > java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450) > at > org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011) > > ... 
3 more > Caused by: java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at > org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:32) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateMapSnapshot.writeState(CopyOnWriteStateMapSnapshot.java:114) > at > org.apache.flink.runtime.state.heap.AbstractStateTableSnapshot.writeStateInKeyGroup(AbstractStateTableSnapshot.java:121) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateTableSnapshot.writeStateInKeyGroup(CopyOnWriteStateTableSnapshot.java:37) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:191) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:158) > at > org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447) > > ... 5 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap and RocksDB)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan updated FLINK-18464: -- Summary: ClassCastException during namespace serialization for checkpoint (Heap and RocksDB) (was: ClassCastException during namespace serialization for checkpoint (Heap)) > ClassCastException during namespace serialization for checkpoint (Heap and > RocksDB) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Task >Affects Versions: 1.9.3 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .evictor(new MyEvictor) > .process(new MyFunction).setParallelism(aggregateParallelism) > .addSink(kafkaSink).setParallelism(sinkParallelism) > .name("kafka-record-sink"){code} > > > {code:java} > java.lang.Exception: Could not materialize checkpoint 1 for operator > Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, > ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5). > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.util.concurrent.ExecutionException: > java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450) > at > org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011) > > ... 
3 more > Caused by: java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at > org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:32) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateMapSnapshot.writeState(CopyOnWriteStateMapSnapshot.java:114) > at > org.apache.flink.runtime.state.heap.AbstractStateTableSnapshot.writeStateInKeyGroup(AbstractStateTableSnapshot.java:121) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateTableSnapshot.writeStateInKeyGroup(CopyOnWriteStateTableSnapshot.java:37) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:191) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:158) > at > org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447) > > ... 5 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap and RocksDB)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan updated FLINK-18464: -- Component/s: (was: Runtime / Task) > ClassCastException during namespace serialization for checkpoint (Heap and > RocksDB) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.9.3, 1.13.1 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .evictor(new MyEvictor) > .process(new MyFunction).setParallelism(aggregateParallelism) > .addSink(kafkaSink).setParallelism(sinkParallelism) > .name("kafka-record-sink"){code} > > > {code:java} > java.lang.Exception: Could not materialize checkpoint 1 for operator > Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, > ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5). > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.util.concurrent.ExecutionException: > java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450) > at > org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011) > > ... 
3 more > Caused by: java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at > org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:32) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateMapSnapshot.writeState(CopyOnWriteStateMapSnapshot.java:114) > at > org.apache.flink.runtime.state.heap.AbstractStateTableSnapshot.writeStateInKeyGroup(AbstractStateTableSnapshot.java:121) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateTableSnapshot.writeStateInKeyGroup(CopyOnWriteStateTableSnapshot.java:37) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:191) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:158) > at > org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447) > > ... 5 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap and RocksDB)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan updated FLINK-18464: -- Component/s: Runtime / State Backends > ClassCastException during namespace serialization for checkpoint (Heap and > RocksDB) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends, > Runtime / Task >Affects Versions: 1.9.3, 1.13.1 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .evictor(new MyEvictor) > .process(new MyFunction).setParallelism(aggregateParallelism) > .addSink(kafkaSink).setParallelism(sinkParallelism) > .name("kafka-record-sink"){code} > > > {code:java} > java.lang.Exception: Could not materialize checkpoint 1 for operator > Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, > ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5). > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.util.concurrent.ExecutionException: > java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450) > at > org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011) > > ... 
3 more > Caused by: java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at > org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:32) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateMapSnapshot.writeState(CopyOnWriteStateMapSnapshot.java:114) > at > org.apache.flink.runtime.state.heap.AbstractStateTableSnapshot.writeStateInKeyGroup(AbstractStateTableSnapshot.java:121) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateTableSnapshot.writeStateInKeyGroup(CopyOnWriteStateTableSnapshot.java:37) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:191) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:158) > at > org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447) > > ... 5 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap and RocksDB)
[ https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan updated FLINK-18464: -- Affects Version/s: 1.13.1 > ClassCastException during namespace serialization for checkpoint (Heap and > RocksDB) > --- > > Key: FLINK-18464 > URL: https://issues.apache.org/jira/browse/FLINK-18464 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Task >Affects Versions: 1.9.3, 1.13.1 >Reporter: Roman Khachatryan >Priority: Major > > From > [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html] > {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and > evictor. The state is stored to memory. > {quote} > > > {code:java} > input.setParallelism(processParallelism) > .assignTimestampsAndWatermarks(new UETimeAssigner) > .keyBy(_.key) > .window(TumblingEventTimeWindows.of(Time.minutes(20))) > .trigger(new MyTrigger) > .evictor(new MyEvictor) > .process(new MyFunction).setParallelism(aggregateParallelism) > .addSink(kafkaSink).setParallelism(sinkParallelism) > .name("kafka-record-sink"){code} > > > {code:java} > java.lang.Exception: Could not materialize checkpoint 1 for operator > Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, > ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5). > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.util.concurrent.ExecutionException: > java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450) > at > org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47) > at > org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011) > > ... 
3 more > Caused by: java.lang.ClassCastException: > org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to > org.apache.flink.runtime.state.VoidNamespace > at > org.apache.flink.runtime.state.VoidNamespaceSerializer.serialize(VoidNamespaceSerializer.java:32) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateMapSnapshot.writeState(CopyOnWriteStateMapSnapshot.java:114) > at > org.apache.flink.runtime.state.heap.AbstractStateTableSnapshot.writeStateInKeyGroup(AbstractStateTableSnapshot.java:121) > at > org.apache.flink.runtime.state.heap.CopyOnWriteStateTableSnapshot.writeStateInKeyGroup(CopyOnWriteStateTableSnapshot.java:37) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:191) > at > org.apache.flink.runtime.state.heap.HeapSnapshotStrategy$1.callInternal(HeapSnapshotStrategy.java:158) > at > org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447) > > ... 5 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22946) Network buffer deadlock introduced by unaligned checkpoint
[ https://issues.apache.org/jira/browse/FLINK-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-22946. -- Resolution: Fixed Merged to master as 903de19442b Merged to release-1.13 as 59322aef23c Merged to release-1.12 as 949db6fdf7b Merged to release-1.11 as aa7baa5bd9d > Network buffer deadlock introduced by unaligned checkpoint > -- > > Key: FLINK-22946 > URL: https://issues.apache.org/jira/browse/FLINK-22946 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.13.0, 1.13.1 >Reporter: Guokuai Huang >Assignee: Guokuai Huang >Priority: Critical > Labels: pull-request-available > Fix For: 1.14.0, 1.13.2 > > Attachments: Screen Shot 2021-06-09 at 6.39.47 PM.png, Screen Shot > 2021-06-09 at 7.02.04 PM.png > > > We recently encountered a deadlock when using unaligned checkpoints. Below are > the two thread stacks that cause the deadlock: > {code:java} > "Channel state writer Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296) > - waiting to lock <0x0007296dfa90> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue.addExclusiveBuffer(BufferManager.java:399) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:200) > - locked <0x0007296bc450> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.write(ChannelStateCheckpointWriter.java:173) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.writeInput(ChannelStateCheckpointWriter.java:131) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$write$0(ChannelStateWriteRequest.java:63) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$785/722492780.accept(Unknown > Source) at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$buildWriteRequest$2(ChannelStateWriteRequest.java:93) > at > 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$786/1360749026.accept(Unknown > Source) at > org.apache.flink.runtime.checkpoint.channel.CheckpointInProgressRequest.execute(ChannelStateWriteRequest.java:212) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatchInternal(ChannelStateWriteRequestDispatcherImpl.java:82) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatch(ChannelStateWriteRequestDispatcherImpl.java:59) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.loop(ChannelStateWriteRequestExecutorImpl.java:96) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.run(ChannelStateWriteRequestExecutorImpl.java:75) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl$$Lambda$253/502209879.run(Unknown > Source) at java.lang.Thread.run(Thread.java:745){code} > {code:java} > "Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.ja
[jira] [Updated] (FLINK-22946) Network buffer deadlock introduced by unaligned checkpoint
[ https://issues.apache.org/jira/browse/FLINK-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski updated FLINK-22946: --- Fix Version/s: 1.12.5 1.11.4 > Network buffer deadlock introduced by unaligned checkpoint > -- > > Key: FLINK-22946 > URL: https://issues.apache.org/jira/browse/FLINK-22946 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.13.0, 1.13.1 >Reporter: Guokuai Huang >Assignee: Guokuai Huang >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.4, 1.14.0, 1.12.5, 1.13.2 > > Attachments: Screen Shot 2021-06-09 at 6.39.47 PM.png, Screen Shot > 2021-06-09 at 7.02.04 PM.png > > > We recently encountered a deadlock when using unaligned checkpoints. Below are > the two thread stacks that cause the deadlock: > {code:java} > "Channel state writer Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296) > - waiting to lock <0x0007296dfa90> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue.addExclusiveBuffer(BufferManager.java:399) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:200) > - locked <0x0007296bc450> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.write(ChannelStateCheckpointWriter.java:173) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.writeInput(ChannelStateCheckpointWriter.java:131) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$write$0(ChannelStateWriteRequest.java:63) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$785/722492780.accept(Unknown > Source) at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$buildWriteRequest$2(ChannelStateWriteRequest.java:93) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$786/1360749026.accept(Unknown > Source) at > 
org.apache.flink.runtime.checkpoint.channel.CheckpointInProgressRequest.execute(ChannelStateWriteRequest.java:212) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatchInternal(ChannelStateWriteRequestDispatcherImpl.java:82) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatch(ChannelStateWriteRequestDispatcherImpl.java:59) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.loop(ChannelStateWriteRequestExecutorImpl.java:96) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.run(ChannelStateWriteRequestExecutorImpl.java:75) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl$$Lambda$253/502209879.run(Unknown > Source) at java.lang.Thread.run(Thread.java:745){code} > {code:java} > "Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296) > - waiting to lock <0x0007296bc450> (a > org.apache.flink.runtime.io.network.partiti
[jira] [Updated] (FLINK-22946) Network buffer deadlock introduced by unaligned checkpoint
[ https://issues.apache.org/jira/browse/FLINK-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski updated FLINK-22946: --- Affects Version/s: (was: 1.13.0) 1.11.3 1.12.4 > Network buffer deadlock introduced by unaligned checkpoint > -- > > Key: FLINK-22946 > URL: https://issues.apache.org/jira/browse/FLINK-22946 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.11.3, 1.13.1, 1.12.4 >Reporter: Guokuai Huang >Assignee: Guokuai Huang >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.4, 1.14.0, 1.12.5, 1.13.2 > > Attachments: Screen Shot 2021-06-09 at 6.39.47 PM.png, Screen Shot > 2021-06-09 at 7.02.04 PM.png > > > We recently encountered a deadlock when using unaligned checkpoints. Below are > the two thread stacks that cause the deadlock: > {code:java} > "Channel state writer Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296) > - waiting to lock <0x0007296dfa90> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue.addExclusiveBuffer(BufferManager.java:399) > at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:200) > - locked <0x0007296bc450> (a > org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110) > at > org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100) > at > org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.write(ChannelStateCheckpointWriter.java:173) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.writeInput(ChannelStateCheckpointWriter.java:131) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$write$0(ChannelStateWriteRequest.java:63) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$785/722492780.accept(Unknown > Source) at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$buildWriteRequest$2(ChannelStateWriteRequest.java:93) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$786/1360749026.accept(Unknown 
> Source) at > org.apache.flink.runtime.checkpoint.channel.CheckpointInProgressRequest.execute(ChannelStateWriteRequest.java:212) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatchInternal(ChannelStateWriteRequestDispatcherImpl.java:82) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatch(ChannelStateWriteRequestDispatcherImpl.java:59) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.loop(ChannelStateWriteRequestExecutorImpl.java:96) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.run(ChannelStateWriteRequestExecutorImpl.java:75) > at > org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl$$Lambda$253/502209879.run(Unknown > Source) at java.lang.Thread.run(Thread.java:745){code} > {code:java} > "Join(xx) (34/256)#1": at > org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296) > - waiting to lock <0x000729
[jira] [Commented] (FLINK-19830) Properly implements processing-time temporal table join
[ https://issues.apache.org/jira/browse/FLINK-19830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365491#comment-17365491 ] Jark Wu commented on FLINK-19830: - [~twalthr] IIRC, the old temporal table function can still work in the latest version. We just don't support this feature with the new temporal join syntax. > Properly implements processing-time temporal table join > --- > > Key: FLINK-19830 > URL: https://issues.apache.org/jira/browse/FLINK-19830 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Runtime >Reporter: Leonard Xu >Priority: Major > Fix For: 1.14.0 > > > The existing TemporalProcessTimeJoinOperator already supports the temporal > table join. > However, the semantics of this implementation are problematic, because the join > processing for the left stream doesn't wait for a complete snapshot of the temporal > table; this may mislead users in production environments. > Under processing-time temporal join semantics, obtaining a complete > snapshot of the temporal table may require introducing a new mechanism in Flink SQL in > the future. -- This message was sent by Atlassian Jira (v8.3.4#803005)
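For reference, a sketch of the temporal table function workaround discussed in the two comments above. The table, column, and function names ("RatesHistory", "Orders", "Rates", r_proctime, r_currency, r_rate, o_proctime, o_currency) are made up for illustration, and the deprecated registerFunction() is used because the old-style TemporalTableFunction predates the new function type system:
{code:java}
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.functions.TemporalTableFunction;

public final class TemporalFunctionWorkaround {

    public static Table joinOrdersWithRates(StreamTableEnvironment tEnv) {
        // Derive a temporal table function from an append-only history table,
        // keyed by currency and versioned by a processing-time attribute.
        Table ratesHistory = tEnv.from("RatesHistory");
        TemporalTableFunction rates =
                ratesHistory.createTemporalTableFunction($("r_proctime"), $("r_currency"));
        tEnv.registerFunction("Rates", rates);

        // Processing-time temporal join expressed through the function.
        return tEnv.sqlQuery(
                "SELECT o.amount * r.r_rate "
                        + "FROM Orders AS o, LATERAL TABLE (Rates(o.o_proctime)) AS r "
                        + "WHERE r.r_currency = o.o_currency");
    }
}
{code}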
[jira] [Assigned] (FLINK-16090) Translate "Table API" page of "Table API & SQL" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-16090: --- Assignee: ji yinglong > Translate "Table API" page of "Table API & SQL" into Chinese > - > > Key: FLINK-16090 > URL: https://issues.apache.org/jira/browse/FLINK-16090 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Assignee: ji yinglong >Priority: Major > Labels: auto-unassigned > > The page URL is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/tableApi.html > The markdown file is located in {{flink/docs/dev/table/tableApi.zh.md}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22698) RabbitMQ source does not stop unless message arrives in queue
[ https://issues.apache.org/jira/browse/FLINK-22698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fabian Paul updated FLINK-22698: Affects Version/s: 1.13.1 1.12.4 > RabbitMQ source does not stop unless message arrives in queue > - > > Key: FLINK-22698 > URL: https://issues.apache.org/jira/browse/FLINK-22698 > Project: Flink > Issue Type: Bug > Components: Connectors / RabbitMQ >Affects Versions: 1.12.0, 1.13.1, 1.12.4 >Reporter: Austin Cawley-Edwards >Assignee: Michał Ciesielczyk >Priority: Major > Labels: pull-request-available > Attachments: taskmanager_thread_dump.json > > > In a streaming job with multiple RMQSources, a stop-with-savepoint request > has unexpected behavior. Regular checkpoints and savepoints complete > successfully; the behavior is seen only with the stop-with-savepoint request. > > *Expected Behavior:* > The stop-with-savepoint request stops the job with a FINISHED state. > > *Actual Behavior:* > The stop-with-savepoint request either times out or hangs indefinitely unless > a message arrives in all the queues that the job consumes from after the > stop-with-savepoint request is made. > > *Current workaround:* > Send to each of the queues consumed by the job a sentinel value that the > deserialization schema checks for in its isEndOfStream method. This is cumbersome > and makes it difficult to do stateful upgrades, as coordination with another > system is now necessary. > > > The TaskManager thread dump is attached. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
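A sketch of the sentinel workaround the ticket describes; the "__STOP__" marker, the class name, and the choice of SimpleStringSchema as a base are assumptions, not part of the report. An instance would be passed to each RMQSource, and publishing the sentinel to every consumed queue lets the sources reach end-of-stream so stop-with-savepoint can complete:
{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;

public class SentinelStringSchema extends SimpleStringSchema {

    private static final String SENTINEL = "__STOP__";

    // The source stops consuming once the schema reports end-of-stream.
    @Override
    public boolean isEndOfStream(String nextElement) {
        return SENTINEL.equals(nextElement);
    }
}
{code}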
[jira] [Commented] (FLINK-23031) Support to emit window result with periodic or non_periodic
[ https://issues.apache.org/jira/browse/FLINK-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365494#comment-17365494 ] Jark Wu commented on FLINK-23031: - [~aitozi] what do you mean by "non_periodic"? We have to register the timer because we don't know whether any data will arrive. > Support to emit window result with periodic or non_periodic > --- > > Key: FLINK-23031 > URL: https://issues.apache.org/jira/browse/FLINK-23031 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Reporter: Aitozi >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
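For context, the closest existing knob to a periodic emit today appears to be the planner's experimental early-fire options; treat the keys and their exact semantics below as assumptions rather than documented, stable API:
{code:java}
import org.apache.flink.table.api.TableEnvironment;

public final class EarlyFireConfig {
    public static void configure(TableEnvironment tEnv) {
        // Experimental: emit partial window results periodically instead of
        // only when the window closes.
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.emit.early-fire.enabled", "true");
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.emit.early-fire.delay", "1 min");
    }
}
{code}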
[jira] [Updated] (FLINK-22698) RabbitMQ source does not stop unless message arrives in queue
[ https://issues.apache.org/jira/browse/FLINK-22698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fabian Paul updated FLINK-22698: Affects Version/s: (was: 1.12.0) 1.12.4 > RabbitMQ source does not stop unless message arrives in queue > - > > Key: FLINK-22698 > URL: https://issues.apache.org/jira/browse/FLINK-22698 > Project: Flink > Issue Type: Bug > Components: Connectors / RabbitMQ >Affects Versions: 1.13.1, 1.12.4 >Reporter: Austin Cawley-Edwards >Assignee: Michał Ciesielczyk >Priority: Major > Labels: pull-request-available > Attachments: taskmanager_thread_dump.json > > > In a streaming job with multiple RMQSources, a stop-with-savepoint request > has unexpected behavior. Regular checkpoints and savepoints complete > successfully; the behavior is seen only with the stop-with-savepoint request. > > *Expected Behavior:* > The stop-with-savepoint request stops the job with a FINISHED state. > > *Actual Behavior:* > The stop-with-savepoint request either times out or hangs indefinitely unless > a message arrives in all the queues that the job consumes from after the > stop-with-savepoint request is made. > > *Current workaround:* > Send to each of the queues consumed by the job a sentinel value that the > deserialization schema checks for in its isEndOfStream method. This is cumbersome > and makes it difficult to do stateful upgrades, as coordination with another > system is now necessary. > > > The TaskManager thread dump is attached. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22698) RabbitMQ source does not stop unless message arrives in queue
[ https://issues.apache.org/jira/browse/FLINK-22698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fabian Paul updated FLINK-22698: Affects Version/s: (was: 1.12.4) > RabbitMQ source does not stop unless message arrives in queue > - > > Key: FLINK-22698 > URL: https://issues.apache.org/jira/browse/FLINK-22698 > Project: Flink > Issue Type: Bug > Components: Connectors / RabbitMQ >Affects Versions: 1.12.0, 1.13.1 >Reporter: Austin Cawley-Edwards >Assignee: Michał Ciesielczyk >Priority: Major > Labels: pull-request-available > Attachments: taskmanager_thread_dump.json > > > In a streaming job with multiple RMQSources, a stop-with-savepoint request > has unexpected behavior. Regular checkpoints and savepoints complete > successfully; the behavior is seen only with the stop-with-savepoint request. > > *Expected Behavior:* > The stop-with-savepoint request stops the job with a FINISHED state. > > *Actual Behavior:* > The stop-with-savepoint request either times out or hangs indefinitely unless > a message arrives in all the queues that the job consumes from after the > stop-with-savepoint request is made. > > *Current workaround:* > Send to each of the queues consumed by the job a sentinel value that the > deserialization schema checks for in its isEndOfStream method. This is cumbersome > and makes it difficult to do stateful upgrades, as coordination with another > system is now necessary. > > > The TaskManager thread dump is attached. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23038) Broken link in Documentation Style Guide
Daisy Tsang created FLINK-23038: --- Summary: Broken link in Documentation Style Guide Key: FLINK-23038 URL: https://issues.apache.org/jira/browse/FLINK-23038 Project: Flink Issue Type: Bug Reporter: Daisy Tsang The link to the Flink Glossary is broken here: https://flink.apache.org/contributing/docs-style.html -- This message was sent by Atlassian Jira (v8.3.4#803005)