Hi,
I used a Scala library called scallop [1] to parse my job's arguments. When an argument is missing from the config settings, scallop's default behavior is to call sys.exit(1).
This is not a problem when I submit the job with the Flink CLI. However, when I use the REST API to submit the job, it seems
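For reference, a possible workaround is to override scallop's error hook so that a bad argument raises an exception instead of terminating the JVM that hosts the REST endpoint. This is a sketch, assuming scallop's documented `onError` override on `ScallopConf`; the `JobConf` class and `inputTopic` option are illustrative:

```scala
import org.rogach.scallop.ScallopConf
import org.rogach.scallop.exceptions.ScallopException

// Sketch: override scallop's default error handling so argument errors
// surface as exceptions instead of calling sys.exit(1).
class JobConf(arguments: Seq[String]) extends ScallopConf(arguments) {
  val inputTopic = opt[String](required = true)

  // Default behavior prints the message and exits the JVM; throwing
  // instead lets the REST job submission report the failure.
  override def onError(e: Throwable): Unit = e match {
    case ScallopException(message) =>
      throw new IllegalArgumentException(message)
    case other => throw other
  }

  verify()
}
```

With this in place, a missing required option fails the job submission with an exception rather than killing the process serving the REST API.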
FYI, here is the JIRA to support a timeout in the savepoint REST API:
https://issues.apache.org/jira/browse/FLINK-10360
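For context, triggering a savepoint over the REST API (Flink 1.5+) is asynchronous: the POST returns a trigger id that is then polled, which is why a timeout (the subject of FLINK-10360) matters. A sketch of the exchange, with placeholder paths and ids:

```
POST /jobs/<job-id>/savepoints
{ "target-directory": "s3://<bucket>/savepoints", "cancel-job": false }

response: { "request-id": "<trigger-id>" }

GET /jobs/<job-id>/savepoints/<trigger-id>
response: status IN_PROGRESS, then COMPLETED with the savepoint location
```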
On Fri, Nov 2, 2018 at 6:37 PM Gagan Agrawal wrote:
> Great, thanks for sharing that info.
>
> Gagan
>
> On Thu, Nov 1, 2018 at 1:50 PM Yun Tang wrote:
>
>> Haha, actually externaliz
Hi Hao Sun,
When you use the Job Cluster mode, you should be sure to isolate the ZooKeeper path for different jobs.
Ufuk is correct: we fixed the JobID so that the JobGraph can be found during failover.
In fact, FLINK-10291 should be combined with FLINK-10292 [1].
To Till,
I hope FLINK-10292 can be
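One way to isolate the ZooKeeper path per job is through the HA configuration keys. A sketch for flink-conf.yaml; the quorum hosts and cluster id are placeholders:

```
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.zookeeper.path.root: /flink
# Give each job cluster its own namespace under the root path,
# so jobs with the same (fixed) JobID do not collide:
high-availability.cluster-id: /my-job-a
```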
I have the following questions regarding savepoint recovery.
- In my job, it takes over 30 minutes to take a savepoint of over 100 GB
on 3 TMs. Most of the time is spent after the alignment; I assume it goes
to serialization and uploading to S3. However, when I resume a new job
from the savepoint, it only
Hi, what is the recommended method for using BucketingSink and compressing
files with GZIP before they are uploaded to S3?
I read that one way is to extend the StreamWriterBase class and wrap the stream
in a GZIPOutputStream. Is there a Flink example for this? If so, what would
be the proper way
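The approach described above might look like the following. This is a sketch, assuming the Flink 1.x `StreamWriterBase`/`Writer` API from flink-connector-filesystem; the class name `GzipStringWriter` is illustrative:

```scala
import java.util.zip.GZIPOutputStream
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.flink.streaming.connectors.fs.{StreamWriterBase, Writer}

// Sketch: a BucketingSink writer that wraps the underlying output
// stream in a GZIPOutputStream so part files are gzip-compressed.
class GzipStringWriter extends StreamWriterBase[String] {

  @transient private var compressed: GZIPOutputStream = _

  override def open(fs: FileSystem, path: Path): Unit = {
    super.open(fs, path)
    // Wrap the stream opened by the base class.
    compressed = new GZIPOutputStream(getStream)
  }

  override def write(element: String): Unit =
    compressed.write((element + "\n").getBytes("UTF-8"))

  override def close(): Unit = {
    // Write the gzip trailer before the underlying stream is closed.
    if (compressed != null) compressed.finish()
    super.close()
  }

  override def duplicate(): Writer[String] = new GzipStringWriter
}
```

It would then be installed with `setWriter(new GzipStringWriter)` on the BucketingSink, and the part-file suffix set to ".gz" if your Flink version supports configuring it.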
Thanks, that also works. To avoid the same issue with ZooKeeper, I assume I
have to do the same trick?
On Sun, Nov 4, 2018, 03:34 Ufuk Celebi wrote:
> Hey Hao Sun,
>
> this has been changed recently [1] in order to properly support
> failover in job cluster mode.
>
> A workaround for you would be to add an application identifier to the
> checkpoint path of each application, resulting in S3 paths like
> application-/00...00/chk-64.
Hey Hao Sun,
this has been changed recently [1] in order to properly support
failover in job cluster mode.
A workaround for you would be to add an application identifier to the
checkpoint path of each application, resulting in S3 paths like
application-/00...00/chk-64.
Is that a feasible solution?
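The workaround described in this thread could be expressed in flink-conf.yaml roughly as follows; the bucket name and application identifier are placeholders:

```
# One checkpoint namespace per application, so the fixed JobID
# (00...00 in job cluster mode) cannot collide across applications:
state.checkpoints.dir: s3://<bucket>/checkpoints/application-<app-id>
```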
Hi Ravi, some questions:
- Is this using Flink 1.6.2 with the dependencies (flink-s3-fs-hadoop,
flink-statebackend-rocksdb, hadoop-common, hadoop-aws,
hadoop-hdfs)? If so, could you please share your dependency versions?
- Does this use a Kafka source with high Flink parallelism (~
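For reference, a dependency declaration of the kind being asked about might look like this in sbt. The versions here are placeholders, not recommendations; they would be replaced with the versions actually in use:

```scala
// Sketch only: version numbers are illustrative.
val flinkVersion = "1.6.2"
val hadoopVersion = "2.8.4"

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
  "org.apache.flink"  % "flink-s3-fs-hadoop"         % flinkVersion,
  "org.apache.hadoop" % "hadoop-common"              % hadoopVersion,
  "org.apache.hadoop" % "hadoop-aws"                 % hadoopVersion,
  "org.apache.hadoop" % "hadoop-hdfs"                % hadoopVersion
)
```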