Not sure that I understand your statement about "the HaServices are only
being given the JobGraph". It seems HighAvailabilityServices#getJobGraphStore
provides a JobGraphStore, and potentially an implementation of
JobGraphStore#recoverJobGraph(JobID jobId) for this store could build a new
graph from the jar.
The HaServices are only being given the JobGraph, so this is not possible.
Actually I have to correct myself. For a job cluster the state in HA
should be irrelevant when you're submitting another jar.
Flink has no way of knowing that this jar is in any way connected to the
previous job; they wi
Yup! This definitely helps and makes sense.
The 'after' payload comes with all the data from the row, right? So essentially,
for inserts and updates I can insert/replace data by pk, for null values I just
delete by pk, and then I can build out the rest of my joins as normal.
Are there any performance impl
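The upsert-by-pk / delete-on-null logic described above can be sketched outside Flink over a plain map; all names here are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ChangelogApply {
    // Apply one Debezium-style change event: a non-null 'after' payload
    // upserts the row by pk; a null 'after' (delete event) removes it by pk.
    static void apply(Map<Long, String> table, long pk, String after) {
        if (after == null) {
            table.remove(pk);     // delete by pk
        } else {
            table.put(pk, after); // insert or update (replace by pk)
        }
    }

    public static void main(String[] args) {
        Map<Long, String> table = new HashMap<>();
        apply(table, 1L, "rowA");    // insert
        apply(table, 1L, "rowA_v2"); // update replaces by pk
        apply(table, 2L, "rowB");    // insert
        apply(table, 2L, null);      // delete by pk
        System.out.println(table);   // prints {1=rowA_v2}
    }
}
```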
Manas,
One option you could try is to set the scope in the dependencies as
compile for the required artifacts rather than provided.
Prasanna.
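The scope change suggested above would look like this in the pom.xml; the artifact coordinates below are placeholders for whichever dependency is failing to resolve:

```xml
<!-- Hypothetical example: switch a Flink dependency from 'provided' to
     'compile' so it gets bundled into the job jar. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.10.0</version>
  <scope>compile</scope> <!-- was: provided -->
</dependency>
```

Note that bundling Flink's core artifacts can cause classpath conflicts on a cluster that already ships them, so this is usually a diagnostic step rather than a permanent fix.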
On Fri, Aug 21, 2020 at 1:47 PM Chesnay Schepler wrote:
> If this class cannot be found on the classpath then chances are Flink is
> completely missing
Hi Team,
Following is the pipeline
Kafka => Processing => SNS Topics.
Flink does not provide an SNS connector out of the box.
a) I implemented the above by using the AWS SDK and published the messages in
the Map operator itself.
The pipeline is working well; I see messages flowing to the SNS topics.
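A minimal sketch of approach (a), assuming the AWS SDK v2 SNS client; the topic ARN is a placeholder, and error handling/retries are omitted (a proper sink function would usually be preferable to publishing from a map operator):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

// Sketch only: publishes each record to an SNS topic from inside a map operator.
public class SnsPublishingMap extends RichMapFunction<String, String> {
    private transient SnsClient sns;

    @Override
    public void open(Configuration parameters) {
        // Region and credentials come from the default provider chain.
        sns = SnsClient.create();
    }

    @Override
    public String map(String value) {
        sns.publish(PublishRequest.builder()
                .topicArn("arn:aws:sns:us-east-1:123456789012:my-topic") // placeholder
                .message(value)
                .build());
        return value;
    }

    @Override
    public void close() {
        if (sns != null) sns.close();
    }
}
```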
b)
Is it feasible to override ZooKeeperHaServices to recreate the JobGraph from the
jar instead of reading it from ZooKeeper state? Any hints? I have a feeling that
rebuilding the JobGraph from the jar is a more resilient approach, with less
chance of mistakes during an upgrade.
Thanks,
Alexey
From: P
Hi guys,
I use flink version 1.7.2
I have a stateful streaming job which uses a keyed process function. I use the
heap state backend. Although I set the TM heap size to 16 GB, I get an OOM error
when the state size is around 2.5 GB (I get the state size from the dashboard).
I have set taskmanager.memory.fraction:
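For reference, the settings involved look like this in flink-conf.yaml for Flink 1.7 (values are illustrative, not a recommendation; with the heap state backend, live state plus the copies made during snapshots must fit on the JVM heap alongside user code):

```yaml
# flink-conf.yaml (Flink 1.7) -- illustrative values
taskmanager.heap.size: 16g
# Fraction of the free heap reserved for Flink's managed memory (batch jobs);
# heap state backend streaming state competes for the remaining heap.
taskmanager.memory.fraction: 0.7
```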
It should be akka.ask.timeout, which defines the RPC timeout. You
can decrease it, but it might cause other RPCs to fail if you set it too
low.
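The knob in question goes in flink-conf.yaml; the lowered value below is purely illustrative:

```yaml
# flink-conf.yaml -- the default RPC ask timeout is 10 s. A lower value
# (illustrative) speeds up failure detection but risks spurious timeouts
# for other RPCs under load.
akka.ask.timeout: 5 s
```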
Cheers,
Till
On Fri, Aug 21, 2020 at 9:45 AM Zhinan Cheng
wrote:
> Hi Till,
>
> Thanks for the reply.
> Is the timeout 10s here always necessary
Hi, Rex.
Part of what enabled CDC support in Flink 1.11 was the refactoring of the
table source interfaces (FLIP-95 [1]) and the new ScanTableSource
[2], which allows emitting bounded/unbounded streams with insert, update and
delete rows.
In theory, you could consume data generated with Debezium
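As a sketch of what consuming Debezium events looks like in Flink 1.11 SQL (table name, schema, and topic are made up for illustration):

```sql
-- Hypothetical table reading Debezium change events from Kafka (Flink 1.11+).
-- The 'debezium-json' format interprets before/after payloads as
-- insert, update and delete rows.
CREATE TABLE orders (
  id BIGINT,
  amount DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'debezium-json'
);
```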
@Jark Would it be possible to use the 1.11 debezium support in 1.10?
On 20/08/2020 19:59, Rex Fenley wrote:
Hi,
I'm trying to set up Flink with Debezium CDC Connector on AWS EMR,
however, EMR only supports Flink 1.10.0, whereas Debezium Connector
arrived in Flink 1.11.0, from looking at the d
If this class cannot be found on the classpath then chances are Flink is
completely missing from the classpath.
I haven't worked with EMR, but my guess is that you did not submit
things correctly.
From the EMR documentation I could gather that the submission should
work without the submitted
Hi Till,
Thanks for the reply.
Is the timeout 10s here always necessary?
Can I reduce this value to reduce the restart time of the job?
I cannot find this term in the configuration of Flink currently.
Regards,
Zhinan
On Fri, 21 Aug 2020 at 15:28, Till Rohrmann wrote:
> You are right. The prob
You are right. The problem is that Flink tries three times to cancel the
call and every RPC call has a timeout of 10s. Since the machine on which
the Task ran has died, it will take that long until the system decides to
fail the Task instead [1].
[1]
https://github.com/apache/flink/blob/master/fli
Hi Vijay,
could you move the s3 filesystem
dependency lib/flink-s3-fs-hadoop-1.10.0.jar into the plugin directory? See
this link [1] for more information. Since Flink 1.10 we have removed the
relocation of filesystem dependencies because the recommended way to load
them is via Flink's plugin mecha
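The relocation described above can be sketched as follows, run from the Flink distribution root (paths assume a standard Flink 1.10 layout; each plugin needs its own subdirectory under plugins/):

```shell
# Move the s3 filesystem jar out of lib/ into its own plugins/ subdirectory
# so Flink's plugin loader picks it up with an isolated classloader.
mkdir -p plugins/s3-fs-hadoop
mv lib/flink-s3-fs-hadoop-1.10.0.jar plugins/s3-fs-hadoop/
```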