Thank you both for the answers.
So I just want to get this right.
Can I achieve HA for the Job Cluster Docker config by having the ZooKeeper
quorum configured as mentioned in [1] (with S3 and ZooKeeper)?
I assume I need to modify the default Job Cluster config to match the setup in [1].
[1]
https://ci.apach
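For reference, a ZooKeeper + S3 HA setup along those lines would typically be expressed in flink-conf.yaml roughly like the sketch below (the quorum hosts, bucket name, and cluster id are placeholders, not values from the linked docs):

```yaml
# flink-conf.yaml (sketch; hostnames, bucket, and cluster id are examples)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.storageDir: s3://my-bucket/flink/ha/
high-availability.cluster-id: /my-job-cluster
state.checkpoints.dir: s3://my-bucket/flink/checkpoints/
```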
Team,
Currently I have Elasticsearch added as a sink to a stream, inserting the
JSON data. The problem is that when I restore the application after a crash,
it reprocesses the in-between data (meanwhile a backend application updates the
document in ES) and Flink reinserts the document in ES and all u
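One common way to make replays after a restore idempotent is to index with a deterministic document id derived from the record's key fields, so a reprocessed record overwrites the same ES document instead of creating a duplicate. A minimal sketch of such an id function (the field names in the example are illustrative, not from this thread):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DeterministicDocId {

    // Derive a stable document id from the record's key fields, so that a
    // replayed record after restore hits the same Elasticsearch document
    // (an overwrite/upsert) rather than inserting a new one.
    static String docId(String... keyFields) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (String field : keyFields) {
                md.update(field.getBytes(StandardCharsets.UTF_8));
                md.update((byte) 0); // separator so ("ab","c") != ("a","bc")
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Same key fields always produce the same id, so replays overwrite.
        System.out.println(docId("order-42", "2020-02-01"));
    }
}
```

The id would then be set on the IndexRequest in the ElasticsearchSinkFunction instead of letting ES auto-generate one.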
Thanks David. It worked after putting the jar inside its own folder.
On Sat, Feb 1, 2020 at 2:37 AM David Magalhães
wrote:
> Did you put each inside a different folder with their name? Like
> /opt/flink/plugins/s3-fs-presto/flink-s3-fs-presto-1.9.1.jar ?
>
> check
> https://ci.apache.org/projects/flink
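For anyone hitting the same issue, the layout being described is one subdirectory per plugin under plugins/, with the jar inside it. A sketch (the paths and version number are examples):

```shell
# Each filesystem plugin jar goes into its own subdirectory under plugins/,
# not directly into plugins/. FLINK_HOME and the version are examples.
FLINK_HOME=${FLINK_HOME:-/tmp/flink-demo}
mkdir -p "$FLINK_HOME/plugins/s3-fs-presto"
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
# Resulting layout:
#   $FLINK_HOME/plugins/s3-fs-presto/flink-s3-fs-presto-1.9.1.jar
#   $FLINK_HOME/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.9.1.jar
ls -R "$FLINK_HOME/plugins"
```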
Hi there,
I am just exploring the Apache Flink git repo and found the performance
tests. I have already run them on my local machine; I'm wondering if there
are results published online?
Thanks
Regards
Xu Yan
Just like tison said, you could use a Deployment to restart the
jobmanager pod. However, if you want all jobs to be able to recover from
their checkpoints, you also need to use ZooKeeper and HDFS/S3 to store the
high-availability data.
Also, native Kubernetes HA support is planned[1].
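As a sketch, running the JobManager as a Kubernetes Deployment (rather than a bare pod) is what makes Kubernetes recreate it on failure; the names, labels, and image tag below are made up for illustration:

```yaml
# Sketch of a JobManager Deployment; names and image tag are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1            # single JM; the Deployment recreates the pod if it dies
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
        - name: jobmanager
          image: flink:1.10   # example tag
          args: ["jobmanager"]
```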
Hi Krzysztof,
Flink doesn't provide JM HA itself yet.
For YARN deployments, you can rely on the yarn.application-attempts
configuration[1]; for Kubernetes deployments, Flink uses a Kubernetes
Deployment to restart a failed JM.
However, standalone mode doesn't tolerate JM failure, and the strategies
above
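The YARN option mentioned above is a single flink-conf.yaml entry; a sketch (the attempt count is an example, and it is bounded by YARN's own yarn.resourcemanager.am.max-attempts on the cluster side):

```yaml
# flink-conf.yaml (sketch; the value is an example)
yarn.application-attempts: 4
```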
Dear community,
happy to share this week's community update. Activity on the dev@ mailing
list has picked up quite a bit this week, and more and more concrete design
proposals for Flink 1.11 are being brought up for discussion. Besides that,
Flink 1.10 and flink-shaded 10.0 are both close to being released