We ran into issues using EFS (which under the covers is an NFS-like
filesystem); details are in this post:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/External-checkpoints-not-getting-cleaned-up-discarded-potentially-causing-high-load-tp14073p14106.html
To: user@flink.apache.org
Subject: Re: Using local FS for checkpoint
Hi Marchant,
HDFS is not a must for storing checkpoints. S3 or NFS is also acceptable,
as long as the location is accessible from both the job manager and the
task managers.
For AWS S3 configuration, you can refer to this page (
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html).
Best,
Tony Wei
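To make that concrete, here is a minimal sketch (not from the original thread) of pointing the checkpoint directory at S3 or a shared NFS mount from the job code. The bucket name and mount path are placeholders, and the s3:// URI assumes the S3 filesystem setup described on the AWS docs page linked above:

// Sketch only: checkpoint directory on shared, durable storage.
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToSharedStorage {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Any URI reachable from the job manager and all task managers works, e.g.:
        //   s3://my-bucket/flink/checkpoints          (placeholder bucket, needs S3 FS setup)
        //   file:///mnt/shared-nfs/flink/checkpoints  (an NFS mount present on every node)
        env.setStateBackend(new FsStateBackend("s3://my-bucket/flink/checkpoints"));

        // Trigger a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // ... build and execute the job ...
    }
}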
Whether I use the RocksDB or FS state backend, if my requirements are
fault tolerance and the ability to recover with at-least-once semantics for my
Flink job, is there still a valid case for using a backing local FS for storing
state? I.e., if a Flink node is invalidated, I would have thought the
checkpoint data on its local FS would be lost.
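For reference, a hedged sketch of what at-least-once checkpointing with the RocksDB backend might look like, assuming flink-statebackend-rocksdb is on the classpath. The working state lives in RocksDB on each task manager's local disk, but the checkpoint URI (a placeholder here) must point at storage that survives the loss of a node, which is the concern raised above:

// Sketch only: at-least-once checkpointing with RocksDB keyed state.
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtLeastOnceCheckpointing {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Working state is kept in RocksDB on each task manager; snapshots go to the
        // URI below. If this pointed at a node-local path, the snapshot would
        // disappear together with the failed node.
        env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/flink/checkpoints"));

        // At-least-once relaxes barrier alignment compared to exactly-once.
        env.enableCheckpointing(60_000, CheckpointingMode.AT_LEAST_ONCE);

        // ... build and execute the job ...
    }
}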