In my case the problem seems to happen when a streaming job is recovering
with large state. I don't really understand how it could be caused by what
you described, as that seems to affect batch jobs mostly.
But I could easily be wrong; maybe there are other implications of the above
issue.
Gyula
Hi,
I have two Flink jobs; both use the Flink Kafka Producer and the
Flink Kafka Consumer running in exactly-once mode.
The parallelism of both jobs is one.
Both jobs have the same number and types of operators.
When we start one job, that job runs fine. But as soon as we
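(For reference, a rough sketch of the kind of exactly-once Kafka job described above; the topic names, checkpoint interval, and the Kafka 0.11 connector are assumptions on my part, not details taken from the original jobs:)

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Exactly-once delivery requires checkpointing to be enabled.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "job-a");
        // The transactional producer's timeout must not exceed the broker's
        // transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "900000");

        FlinkKafkaConsumer011<String> consumer =
                new FlinkKafkaConsumer011<>("input-topic", new SimpleStringSchema(), props);

        FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
                "output-topic",
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);

        env.addSource(consumer)
           .addSink(producer);

        env.execute("exactly-once-kafka-job");
    }
}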
Hi Flink community,
I have a custom source that emits a user-defined data type, BaseEvent. The
following code works fine when BaseEvent is not a POJO.
But when I change it to a POJO by adding a default constructor, I get a
"Buffer Pool is destroyed" runtime exception from the collect method.
Hi Gyula,
Your issue is possibly related to [1], where slots are prematurely released.
I’ve raised a PR, which is still pending review.
[1] https://issues.apache.org/jira/browse/FLINK-10941
> On Dec 20, 2018, at 9:33 PM, Gyula Fóra wrote:
>
> Hi!
>
> Since we have moved to the new execution mode wi
Yeah, I figured that part out. I’ve tried to make it work with 2.7 and 2.8, and
it looks like the prebuilt jars have actually moved to Hadoop 3
From: Paul Lam [mailto:paullin3...@gmail.com]
Sent: Tuesday, December 18, 2018 7:08 PM
To: Martin, Nick
Cc: user@flink.apache.org
Subject: EXT :Re: Cust
Dear community,
this is the weekly community update thread #51. Please post any news and
updates you want to share with the community to this thread.
# Flink Forward China is happening
This week the Flink community meets in Beijing for the first Flink Forward
China, which takes place from the 20t
Since you define a 15-second window you have to ensure that your source
generates at least 15 seconds' worth of data; otherwise the window will
never fire.
Since you do not use event time, your source has to actually run for at
least 15 seconds; in this case collection sources will simply not work.
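(To illustrate, a rough processing-time sketch; the socket source and the word-count-style pipeline are placeholders of my own, the point is only that the source must keep running longer than the 15-second window:)

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowFiringExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A finite collection source (env.fromCollection / fromElements) finishes
        // almost immediately, so a 15-second processing-time window never fires.
        // The source has to keep producing for at least the window length;
        // a socket source is used here only as an example of a long-running source.
        env.socketTextStream("localhost", 9999)
           .map(new MapFunction<String, Tuple2<String, Integer>>() {
               @Override
               public Tuple2<String, Integer> map(String line) {
                   return Tuple2.of(line, 1);
               }
           })
           .keyBy(0)
           .timeWindow(Time.seconds(15))
           .sum(1)
           .print();

        env.execute("window-firing-example");
    }
}

With event time and a watermark assigner the situation is different: when a finite source finishes, Flink emits a final watermark that fires any pending event-time windows, which is usually the easier route in unit tests.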
Hey guys,
I ran into a situation and I need your help.
I'm trying to use a unit test to check my results.
First, my timeWindow is set to 15 seconds, but the assertEquals doesn't wait
for the window to produce the answer,
so everything is telling me index 0 out of bounds (because it didn't get to
place
Hi!
Since we have moved to the new execution mode with Flink 1.7.0, we have
observed some pretty bad stability issues with the YARN execution.
It's pretty hard to understand what's going on, so sorry for the vague
description, but here is what seems to happen:
In some cases when a bigger job fails
What is the easiest way to use S3 from a Flink job deployed in a session
cluster on Kubernetes?
I've tried including the flink-s3-fs-hadoop dependency in the sbt file
for my job; can I programmatically set the properties to point to it?
Is there a ready-made Docker image for Flink with S3 dependencies
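(Not an authoritative answer, but a rough sketch of what writing to S3 from job code can look like once the flink-s3-fs-hadoop filesystem is available on the cluster; the bucket name, checkpoint interval, and config keys below are only illustrative:)

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class S3SinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // StreamingFileSink finalizes part files on checkpoints.
        env.enableCheckpointing(10_000);

        // Requires the flink-s3-fs-hadoop filesystem on the cluster (in Flink
        // 1.7 it ships in opt/ and is copied into lib/), with credentials
        // usually set in flink-conf.yaml, e.g.
        //   s3.access-key: ...
        //   s3.secret-key: ...
        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("s3://my-bucket/output"),
                              new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromElements("a", "b", "c")
           .addSink(sink);

        env.execute("s3-sink-example");
    }
}

As far as I know, the shaded filesystem is picked up from the cluster classpath and flink-conf.yaml rather than from the job jar or properties set in job code, so bundling it via sbt is usually not needed.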