Possible way to avoid unnecessary serialization calls.

2021-05-09 Thread Alex Drobinsky
Dear entity that represents the Flink user community, in order to formulate the question itself I would need to describe the problem in some detail, so please bear with me for a while. I have the following execution graph: KafkaSource -> Message parser -> keyBy -> TCP assembly -> keyBy -> Storage -
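
A minimal DataStream sketch of the topology described above, to make the discussion concrete; the record types, key fields, topic name, and Kafka settings are placeholders of my own, not taken from the original post. Each keyBy introduces a network shuffle, which is where Flink serializes records between operators.

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class PipelineSketch {

        // Placeholder record types, standing in for whatever the real job uses.
        public static class Message {
            public String connectionId;
            public byte[] payload;
        }

        public static class Session {
            public String connectionId;
            public byte[] assembledPayload;
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
            props.setProperty("group.id", "tcp-assembly");            // placeholder

            // KafkaSource (topic name is a placeholder)
            DataStream<String> raw = env.addSource(
                    new FlinkKafkaConsumer<>("packets", new SimpleStringSchema(), props));

            raw.map(PipelineSketch::parseMessage)        // Message parser
               .keyBy(msg -> msg.connectionId)           // keyBy: records are serialized for the shuffle
               .map(PipelineSketch::assemble)            // TCP assembly (simplified placeholder)
               .keyBy(session -> session.connectionId)   // keyBy
               .print();                                 // stands in for the Storage sink

            env.execute("tcp-assembly-pipeline-sketch");
        }

        private static Message parseMessage(String s) {
            Message m = new Message();
            m.connectionId = s;            // trivial placeholder parsing
            m.payload = s.getBytes();
            return m;
        }

        private static Session assemble(Message m) {
            Session out = new Session();
            out.connectionId = m.connectionId;
            out.assembledPayload = m.payload;
            return out;
        }
    }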

Re: How to split a column value into multiple rows in Flink SQL?

2021-05-09 Thread Yik San Chan
Hi, Maybe try the row-based flatMap operation in the Table API. It is available in Flink 1.13.x: https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/tableapi/#flatmap Best, Yik San
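
A minimal sketch of the suggested row-based flatMap, assuming a comma-separated STRING column; the function name, the "tags" column, and the split delimiter are illustrative, not from the original thread.

    import static org.apache.flink.table.api.Expressions.$;
    import static org.apache.flink.table.api.Expressions.call;

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.annotation.DataTypeHint;
    import org.apache.flink.table.annotation.FunctionHint;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.table.functions.TableFunction;
    import org.apache.flink.types.Row;

    public class FlatMapExample {

        // Emits one output row per comma-separated token of the input string.
        @FunctionHint(output = @DataTypeHint("ROW<token STRING>"))
        public static class SplitFunction extends TableFunction<Row> {
            public void eval(String value) {
                for (String token : value.split(",")) {
                    collect(Row.of(token));
                }
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // Example input: one column "tags" holding comma-separated values.
            Table input = tEnv.fromDataStream(env.fromElements("a,b,c", "d,e")).as("tags");

            // flatMap expands each input row into multiple output rows.
            Table result = input.flatMap(call(SplitFunction.class, $("tags")));

            result.execute().print();
        }
    }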

Re: How to recover the last count when using a CUMULATE window after restarting a Flink SQL job?

2021-05-09 Thread Jark Wu
Hi, When restarting a Flink job, Flink will start it with empty state, because it is treated as a new job. This is not specific to the CUMULATE window; it is true for all Flink jobs. If you want to restore a Flink job from a state/savepoint, you have to specify the savepoint path, see [1]. Best, Jark [1]:
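
For reference, restoring from a savepoint is done by passing the savepoint path when resubmitting the job, for example via the CLI's -s/--fromSavepoint flag (the savepoint path and jar name below are placeholders):

    ./bin/flink run -s hdfs:///flink/savepoints/savepoint-abc123 my-job.jar

For jobs submitted through the SQL Client, the savepoint path can likewise be supplied via the execution.savepoint.path configuration option; see the documentation referenced in [1] for the exact syntax in your Flink version.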

Re: Re: The problem of getting data via the REST API

2021-05-09 Thread penguin.
Thanks for your reply, so why is the data in the Web UI updated every 3 s? At 2021-05-07 15:19:50, "Chesnay Schepler" wrote: To be more precise, the update of the data is scheduled at most once every 10 seconds, but it can of course happen that the result of said update arrives in a d
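
If helpful for context, the 3-second Web UI refresh and the roughly 10-second data update discussed here likely map to two separate configuration options; the values below are the usual defaults as I understand them, so check the docs for your Flink version:

    # flink-conf.yaml (defaults shown)
    web.refresh-interval: 3000              # Web UI auto-refresh period in ms (the "3 s" above)
    metrics.fetcher.update-interval: 10000  # minimum interval between metric fetches in ms (the "10 s" above)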