Thanks all.
On Mon, Nov 5, 2018 at 2:05 AM Ufuk Celebi wrote:
> On Sun, Nov 4, 2018 at 10:34 PM Hao Sun wrote:
> > Thanks, that also works. To avoid the same issue with ZooKeeper, I assume I
> > have to do the same trick?
>
> Yes, exactly. The following configuration [1] entry takes care of this:
>
> high-availability.cluster-id: application-1
On Sun, Nov 4, 2018 at 10:34 PM Hao Sun wrote:
> Thanks, that also works. To avoid the same issue with ZooKeeper, I assume I
> have to do the same trick?
Yes, exactly. The following configuration [1] entry takes care of this:
high-availability.cluster-id: application-1
This will result in ZooKeeper entries being stored under a path that
includes the cluster ID, so each application gets its own namespace.
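As a sketch of what that looks like per job cluster (the quorum addresses and application IDs below are illustrative, not from this thread), each cluster would carry its own `high-availability.cluster-id` in its configuration:

```yaml
# flink-conf.yaml for the first job cluster (values are illustrative)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.cluster-id: application-1
```

```yaml
# flink-conf.yaml for a second job cluster sharing the same ZooKeeper quorum,
# isolated only by its distinct cluster ID
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.cluster-id: application-2
```

With distinct cluster IDs, the two clusters can share one ZooKeeper quorum without their HA metadata colliding.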
Hi Hao Sun,
When you use Job Cluster mode, you should make sure to isolate the
ZooKeeper path for different jobs.
Ufuk is correct. We fixed the JobID so that the JobGraph can be found
again during failover.
In fact, FLINK-10291 should be considered together with FLINK-10292 [1].
To Till,
I hope FLINK-10292 can be
Thanks, that also works. To avoid the same issue with ZooKeeper, I assume I
have to do the same trick?
On Sun, Nov 4, 2018, 03:34 Ufuk Celebi wrote:
> Hey Hao Sun,
>
> this has been changed recently [1] in order to properly support
> failover in job cluster mode.
>
> A workaround for you would be to
Hey Hao Sun,
this has been changed recently [1] in order to properly support
failover in job cluster mode.
A workaround for you would be to add an application identifier to the
checkpoint path of each application, resulting in S3 paths like
application-/00...00/chk-64.
Is that a feasible solution for you?
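A sketch of that workaround, with a hypothetical bucket name and application identifiers (neither appears in the thread): each job cluster sets its own checkpoint directory so the paths no longer overlap.

```yaml
# flink-conf.yaml for one job cluster (bucket and id are illustrative)
state.checkpoints.dir: s3://my-flink-bucket/checkpoints/application-1
```

```yaml
# flink-conf.yaml for a second job cluster sharing the same bucket
state.checkpoints.dir: s3://my-flink-bucket/checkpoints/application-2
```

Since both clusters use the same fixed JobID, the per-application prefix is what keeps one job's checkpoints from overwriting the other's.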
I am wondering if I can customize the job_id for job cluster mode. Currently it
is always . I am running multiple job
clusters sharing the same S3 bucket, which means checkpoints will be shared by
different jobs as well, e.g. /chk-64. How can I avoid
this?