Re: hdfs lease issues on flink retry

2021-09-24 Thread David Morávek
> … would these _temporary directories created by the DataSink task on retry require clean up, or would Flink internally take care of the clean up part?

RE: hdfs lease issues on flink retry

2021-09-24 Thread Shah, Siddharth
Hi David/Matthias, thank you for your suggestion, it seems to be working fine. Had a quick question: would these _temporary directories created by the DataSink task on retry require clean up, or would Flink internally take care of the clean up part?
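For anyone hitting the same leftover-staging question: with Hadoop output formats, the _temporary directory is normally managed by the output committer, but failed attempts can leave it behind. Below is a minimal defensive-cleanup sketch in Java, assuming the Hadoop FileSystem API and a hypothetical output path (this is not code from the thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TemporaryDirCleanup {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical output location; substitute the DataSink's real path.
            Path outputDir = new Path("hdfs:///data/output");
            Path temporaryDir = new Path(outputDir, "_temporary");

            FileSystem fs = outputDir.getFileSystem(conf);
            if (fs.exists(temporaryDir)) {
                // Recursively remove staging data left behind by failed attempts.
                fs.delete(temporaryDir, true);
            }
        }
    }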

RE: hdfs lease issues on flink retry

2021-09-23 Thread Shah, Siddharth
> Hi, AttemptId needs to be an integer (take a look at the TaskAttemptID class for more details). As for your prior question, any random id …
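David's point about the integer can be seen directly in Hadoop's TaskAttemptID: the job id, task id, and attempt number are all modeled as ints. A small construction sketch using the org.apache.hadoop.mapreduce API (the identifier string is made up for illustration):

    import org.apache.hadoop.mapreduce.TaskAttemptID;
    import org.apache.hadoop.mapreduce.TaskType;

    public class AttemptIdExample {
        public static void main(String[] args) {
            // jtIdentifier is an arbitrary string; jobId, taskId and the
            // attempt number must be integers.
            TaskAttemptID attemptId =
                    new TaskAttemptID("202109201712", 1, TaskType.REDUCE, 1, 0);

            // Prints: attempt_202109201712_0001_r_000001_0
            System.out.println(attemptId);
        }
    }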

Re: hdfs lease issues on flink retry

2021-09-20 Thread David Morávek
> I don't know of any side effects of your approach. …

RE: hdfs lease issues on flink retry

2021-09-20 Thread Shah, Siddharth
> I don't know of any side effects of your approach. But another workaround I saw was replacing the _0 suffix by something like "_" + System.currentTimeMillis().
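A minimal sketch of that workaround, with one adjustment: David's later reply notes the trailing attempt field must parse as an integer, and a raw millisecond timestamp overflows int, so the value is folded into int range here (all surrounding names are illustrative, not from the thread):

    public class UniqueAttemptSuffix {
        public static void main(String[] args) {
            // Illustrative prefix; a real id carries the job's own identifier.
            String base = "attempt_202109201712_0001_r_000001";

            // Replace the fixed "_0" suffix with a time-derived value so a
            // retried task never reuses the previous attempt id (and thus
            // never competes for the same HDFS lease). Kept in int range
            // because TaskAttemptID parses this field as an integer.
            int suffix = (int) (System.currentTimeMillis() % Integer.MAX_VALUE);
            System.out.println(base + "_" + suffix);
        }
    }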

Re: hdfs lease issues on flink retry

2021-09-20 Thread Matthias Pohl
> I have tested on a handful of our jobs and it seems to be working fine. Just wanted to check for any downside of this change that I may not be aware of? Thanks, Siddharth

RE: hdfs lease issues on flink retry

2021-09-17 Thread Shah, Siddharth
… attempt__0123_r_0001_0 instead of attempt___r_0001_0. I have tested on a handful of our jobs and it seems to be working fine. Just wanted to check for any downside of this change that I may not be aware of? Thanks, Siddharth
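One way to sanity-check a hand-built id like the one above is to round-trip it through Hadoop's parser; forName rejects malformed strings, including non-integer trailing fields. The id below is fully populated for illustration, since the ids quoted in this thread have elided segments and would not parse as-is:

    import org.apache.hadoop.mapreduce.TaskAttemptID;

    public class AttemptIdParseCheck {
        public static void main(String[] args) {
            String candidate = "attempt_202109201712_0123_r_000001_0";

            // forName throws IllegalArgumentException if the string is
            // malformed, e.g. if the attempt field is not an integer.
            TaskAttemptID parsed = TaskAttemptID.forName(candidate);
            System.out.println(parsed.getTaskID() + " attempt #" + parsed.getId());
        }
    }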

Re: hdfs lease issues on flink retry

2021-09-07 Thread Matthias Pohl

Re: hdfs lease issues on flink retry

2021-08-26 Thread Matthias Pohl
> … Please let us know if you're not able to see the files.

Re: hdfs lease issues on flink retry

2021-08-26 Thread Matthias Pohl
> … for you. Please let us know if you're not able to see the files.

RE: hdfs lease issues on flink retry

2021-08-26 Thread Shah, Siddharth [Engineering]
> Hi Siddharth, thanks for reaching out to the community. This might be a bug. Could you share your Flink and YARN logs?

Re: hdfs lease issues on flink retry

2021-08-26 Thread Matthias Pohl
Hi Siddharth, thanks for reaching out to the community. This might be a bug. Could you share your Flink and YARN logs? This way we could get a better understanding of what's going on. Best, Matthias