Thanks,
Vishal Sharma

On Wed, Jun 19, 2019 at 11:11 PM Chesnay Schepler wrote:

> The _metadata is always stored in the same directory as the checkpoint
> data.
>
> As outlined here
> <https://ci.apache.org/projects/flink/flink-docs-master/ops/state/checkpoints.html#directory-structure>
> "state.checkpoints.dir" serves as a cluster-wide configuration that
> _can_ be overridden on a per-job basis.
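For reference, the directory layout described in the linked checkpoints documentation is roughly the following (bucket name is the poster's example; job id and checkpoint number are placeholders):

```
s3a://test-bucket/checkpoint-metadata/
└── {job-id}/
    └── chk-{checkpoint-number}/
        ├── _metadata
        └── ... (checkpoint data files)
```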
Hi Folks,

I am using Flink 1.8 with externalised checkpointing enabled and saving the
checkpoints to AWS S3.

My configuration is as follows:

flink-conf.yaml:
state.checkpoints.dir: s3a://test-bucket/checkpoint-metadata

In application code:
env.setStateBackend(new RocksDBStateBackend("
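For comparison, a config-file-only sketch of the same setup (key names as documented for Flink 1.8; the bucket is the poster's example, and the incremental setting is an optional extra, not part of the original question):

```yaml
# flink-conf.yaml — declarative alternative to calling setStateBackend()
# in code; state.backend selects RocksDB cluster-wide.
state.backend: rocksdb
state.checkpoints.dir: s3a://test-bucket/checkpoint-metadata
# Incremental checkpoints are optional but common with RocksDB.
state.backend.incremental: true
```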
Hi Felipe,

No, using a Deployment for taskmanagers should be fine—they don't need a
strong identity. For intra-cluster communication, the taskmanager's hostname
is used by default, which in most Kubernetes setups is resolvable to the
Pod IP.

state.checkpoints.dir should be configured the same for all jobmanagers and
taskmanagers.
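A minimal sketch of the Deployment route for taskmanagers (image tag, labels, and replica count are assumptions to adapt to your cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
        - name: taskmanager
          image: flink:1.8   # assumption: official Flink image
          args: ["taskmanager"]
```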
Something you could try is loading the GlobalConfiguration singleton
before executing the job and setting the parameter there.

On 23.01.2018 19:28, Biswajit Das wrote:
> Hi Hao,
> Thank you for the reply. I was mainly trying to find out how to
> manipulate it when I run locally from the IDE.
> ~ Biswajit
Hi Hao,

Thank you for the reply. I was mainly trying to find out how to manipulate it
when I run locally from the IDE.

~ Biswajit

On Mon, Jan 22, 2018 at 12:56 PM, Hao Sun wrote:
> We generate flink.conf on the fly, so we can use different values based on
> environment.
We generate flink.conf on the fly, so we can use different values based on
environment.

On Mon, Jan 22, 2018 at 12:53 PM Biswajit Das wrote:
> Hello,
>
> Is there any hack to supply *state.checkpoints.dir* as an argument or JVM
> parameter when running locally? I can change the source
> *CheckpointCoordinator* and make it work; trying to find out if there are
> any shortcuts?
Hello,

Is there any hack to supply *state.checkpoints.dir* as an argument or JVM
parameter when running locally? I can change the source
*CheckpointCoordinator* and make it work; trying to find out if there are any
shortcuts?

Thank you
~ Biswajit
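One such shortcut, sketched here in plain Java (this is not a Flink API — the helper class and its name are hypothetical): resolve the directory from a JVM system property before constructing the state backend, so `-Dstate.checkpoints.dir=...` can override a local default without touching flink-conf.yaml.

```java
// Hypothetical helper: pick up -Dstate.checkpoints.dir if set, otherwise
// fall back to a local default. Pass the result to the state backend
// constructor instead of relying on flink-conf.yaml when running from the IDE.
public class CheckpointDirResolver {
    static final String KEY = "state.checkpoints.dir";

    static String resolve(String localDefault) {
        // System property wins if present, e.g.
        //   java -Dstate.checkpoints.dir=s3a://bucket/checkpoints ...
        return System.getProperty(KEY, localDefault);
    }

    public static void main(String[] args) {
        System.out.println(resolve("file:///tmp/flink-checkpoints"));
    }
}
```

The resolved string would then be handed to whatever backend constructor you use locally (e.g. a file-based one), keeping the IDE run and the cluster run on the same code path.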
Hi,

I'm deploying Flink to Kubernetes and I have some doubts...

The first is whether the task managers should have a strong identity (in
which case I would use StatefulSets to deploy them). The second is whether
I should point the RocksDB state.checkpoints.dir in all task managers to
the same HDFS path, or whether each one should have its own.
I inspected the log as you suggested and found that port 6123 was used by
another process. I freed the port and restarted the job manager. Now
everything looks fine. The error message is a little misleading, as the real
cause is that 6123 is already bound, but it says that state.checkpoints.dir
is not set.

Thanks

On 19.12.2017 17:55, Ufuk Celebi wrote:
> When the JobManager/TaskManager are starting up, they log what config
> they are loading. Look there to verify which configuration was picked up.
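For completeness: the port the JobManager binds is itself configurable, so a conflict like this can also be sidestepped in flink-conf.yaml rather than by freeing the port (6123 is only the default; 6124 below is an arbitrary example):

```yaml
# flink-conf.yaml — move the JobManager RPC endpoint off the default
# port if 6123 is already taken by another process.
jobmanager.rpc.port: 6124
```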
checkpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
checkpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints-data/", true));

In flink-conf.yaml I set:

state.checkpoints.dir: file:///tmp/flink-checkpoints-meta/

but when I run the application