Hey Dian,
You are right. What this approach does is allow one to "queue n
messages", call some async function on each of them, and return the results.
That means it will indeed block the Flink job after it gathers n messages,
which is a downside. Some optimisations with parallelism, watermarking
e
Hi Dhavan,
Thanks a lot for sharing. This is very interesting. Just want to add
that this is somewhat different from the async I/O operator supported in
Flink, e.g. you are waiting for the results of one element before processing
the next element, so it's actually synchronous from this point of v
Hi, We are using Flink 1.14.3, but when the job tries to restart from a
checkpoint, the following exception occurs. What's wrong?
And how can I avoid it?
Caused by: TimerException{com.esotericsoftware.kryo.KryoException:
java.lang.IndexOutOfBoundsException: Index: 99, Size: 9
Serialization trace:
Thank you for the help. To follow up, the issue went away when we reverted
to Flink 1.13. It may be related to FLINK-27481. Before reverting, we
tested unaligned checkpoints with a timeout of 10 minutes, which timed out.
Thanks.
On Thu, Apr 28, 2022, 5:38 PM Guowei Ma wrote:
> Hi Sam
>
> I thi
I should probably clarify that this is intermittent and it is a different
subtask ID each time it does happen.
On Thu, May 5, 2022 at 4:25 PM Ammon Diether wrote:
> Flink Stateful Functions 3.2.0 (Flink 1.14.3)
> All java embedded code.
> Parallelism 32
> Standard Stateful Functions Tasks: rou
Flink Stateful Functions 3.2.0 (Flink 1.14.3)
All java embedded code.
Parallelism 32
Standard Stateful Functions Tasks: router -> functions -> feedback
The Router reads from Kinesis and routes to stateful functions. For some
reason, one and only one of the router subtasks will have a start
Actually, what's happening is that there's a nightly indexing job, so when we
call the insert it takes longer than the specified checkpoint threshold.
JDBC will happily continue waiting for a response from the DB until it's
done. So the checkpoint threshold is reached and the job tries to shut down
and r
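One knob involved in the situation above is the checkpoint timeout itself. A minimal PyFlink configuration sketch for giving slow synchronous inserts more headroom; the interval and timeout values are illustrative, not taken from this thread:

```python
# Sketch: raise the checkpoint timeout so a slow JDBC insert during the
# nightly indexing job does not cause checkpoints to expire and the job
# to restart. Values below are illustrative only.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(60_000)  # trigger a checkpoint every 60 s
# Allow up to 30 minutes before a checkpoint is declared failed.
env.get_checkpoint_config().set_checkpoint_timeout(30 * 60 * 1000)
```

This only buys time; if the insert can routinely outlast any reasonable timeout, bounding the query on the database side is the more robust fix.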
I have found the following way to make use of aiohttp via asyncio. It works
alright, and the flatmap can be converted to a process function to add timers.
ds = env.from_collection(
    collection=["a", "b"],
    type_info=Types.STRING())

class MyFlatMapFunction(FlatMapFunction):
    queue = []
    asy
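The snippet above is cut off by the archive, but the queue-then-gather pattern it describes can be sketched without Flink at all, using only the standard asyncio library. The coroutine and the batch size below are placeholders standing in for the aiohttp call and the "n messages" from the thread:

```python
import asyncio

BATCH_SIZE = 3  # placeholder for the "n messages" batch size

async def fake_fetch(item: str) -> str:
    # Stand-in for an aiohttp request; here it just transforms the item.
    await asyncio.sleep(0)
    return item.upper()

def flush(queue: list) -> list:
    # Run one async call per queued element and wait for all results.
    # This is the blocking step discussed above: the caller waits here
    # until every request in the batch has completed.
    async def gather_all():
        return await asyncio.gather(*(fake_fetch(x) for x in queue))
    results = asyncio.run(gather_all())
    queue.clear()
    return results

queue = []
out = []
for element in ["a", "b", "c", "d"]:
    queue.append(element)
    if len(queue) >= BATCH_SIZE:
        out.extend(flush(queue))
out.extend(flush(queue))  # flush the remainder
print(out)  # -> ['A', 'B', 'C', 'D']
```

Inside a real FlatMapFunction the loop body would live in `flat_map`, which is exactly why the job blocks once the batch fills: the operator cannot emit or accept new elements until `flush` returns.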
Hi,
In order to unsubscribe, please send an email to
user-unsubscr...@flink.apache.org as documented on the Flink community page
[1].
Best regards,
Martijn Visser
https://twitter.com/MartijnVisser82
https://github.com/MartijnVisser
[1] https://flink.apache.org/community.html
On Thu, 5 May 2022
Hi ChangZhuo,
By low-level process function you mean the DataStream API, right?
How to disable the Kafka metrics when creating a Kafka source / sink is
described here:
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/#additional-properties
The same pr
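For reference, the additional property described at that link can be passed through the source builder. A sketch assuming a PyFlink release that ships `KafkaSource`; the broker address and topic name are placeholders:

```python
# Sketch: disable registration of the underlying Kafka consumer metrics
# via the "register.consumer.metrics" property documented under
# "Additional Properties". Broker/topic names are placeholders.
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.kafka import KafkaSource

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("broker:9092")
    .set_topics("my-topic")
    .set_value_only_deserializer(SimpleStringSchema())
    .set_property("register.consumer.metrics", "false")
    .build()
)
```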