Re: custom writer fails to recover

2018-01-04 Thread Aljoscha Krettek
Hi, which version of Flink is this? It cannot recover because it expects more data to have been written than is actually there, which seems to indicate that flushing did not work correctly. Best, Aljoscha
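
For context: the failure mode Aljoscha describes happens when a Writer's flush() reports a position that was never made durable on HDFS, so the valid length recorded in checkpoint state overshoots the real file. Below is a minimal sketch against the org.apache.flink.streaming.connectors.fs.Writer interface from flink-connector-filesystem (the class name SyncingStringWriter is hypothetical, and exact interface details may vary by Flink version):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import org.apache.flink.streaming.connectors.fs.Writer;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncingStringWriter implements Writer<String> {
        private transient FSDataOutputStream out;

        @Override
        public void open(FileSystem fs, Path path) throws IOException {
            out = fs.create(path, false);
        }

        @Override
        public void write(String element) throws IOException {
            out.write((element + "\n").getBytes(StandardCharsets.UTF_8));
        }

        @Override
        public long flush() throws IOException {
            // Push buffered bytes to the DataNodes before reporting a
            // position; on some HDFS versions hsync() may also be needed
            // so that the reported length is actually durable.
            out.hflush();
            return out.getPos();
        }

        @Override
        public long getPos() throws IOException {
            return out.getPos();
        }

        @Override
        public void close() throws IOException {
            if (out != null) {
                out.close();
            }
        }

        @Override
        public Writer<String> duplicate() {
            return new SyncingStringWriter();
        }
    }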

Re: custom writer fails to recover

2017-12-18 Thread xiatao123
Hi Das, have you got your .pending issue resolved? I am running into the same issue, where the Parquet files are all stuck in pending status. Please share your solution. Thanks, Tao
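
For anyone hitting the same symptom: BucketingSink only promotes files from .pending to their final name when a checkpoint completes, so a job with checkpointing disabled leaves every part file pending forever. Whether that is the cause here is an assumption, but it is the first thing to check:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class EnableCheckpointing {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // .pending files are finalized only on checkpoint completion;
            // without this call they stay pending indefinitely.
            env.enableCheckpointing(60_000); // 60s interval, example value
            // ... build the pipeline with the BucketingSink here, then:
            // env.execute("bucketing-job");
        }
    }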

Re: custom writer fails to recover

2017-08-24 Thread Biswajit Das
Hi Stefan, my bad, I'm really sorry: I copied the wrong exception stack. During recovery after the error I'm seeing the exception below:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException): Cannot truncate to a larger file size. Current size: 3113238…
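
This exception comes from the restore path: on recovery the sink cuts the in-progress part file back to the valid length recorded in checkpoint state, and HDFS rejects a truncate target larger than the file's actual size. A conceptual sketch of that failing step (illustrative code, not Flink's actual implementation):

    import java.io.IOException;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TruncateCheck {
        // Truncate the recovered part file back to the length recorded in
        // state. If the state claims more bytes than the file has, HDFS
        // fails with "Cannot truncate to a larger file size".
        static void truncateToValidLength(FileSystem fs, Path part, long validLength)
                throws IOException {
            long actual = fs.getFileStatus(part).getLen();
            if (validLength > actual) {
                throw new IOException("State records " + validLength
                        + " bytes but the file only has " + actual
                        + ": flush() reported more data than was durably written");
            }
            fs.truncate(part, validLength); // may complete asynchronously on HDFS
        }
    }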

Re: custom writer fails to recover

2017-08-24 Thread Stefan Richter
Hi, I think there are two different things mixed up in your analysis. The stack trace that you provided is caused by a failing checkpoint, in writing, not in reading. It seems to fail due to a timeout of your HDFS connection. This close method also has nothing to do with the close method in the…
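
If the checkpoint really is timing out against HDFS (an assumption based on the trace), one knob to experiment with is the checkpoint timeout, for example:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointTimeout {
        public static void main(String[] args) {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);
            // Give slow HDFS flushes more time before a checkpoint is
            // declared failed (the 20-minute value is an example).
            env.getCheckpointConfig().setCheckpointTimeout(20 * 60 * 1000L);
        }
    }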

custom writer fails to recover

2017-08-23 Thread Biswajit Das
Hi there, I'm using a custom writer with an hourly rolling bucket sink. I'm seeing two issues. First: if I write the same files to S3, all the files get committed; however, when I write the same to HDFS, they remain in .pending state, which could be related to the second problem below. Second issue: My c…
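
For reference, a minimal hourly bucketing setup of the kind described, using BucketingSink from flink-connector-filesystem (the path and batch size are placeholders, and StringWriter stands in for the custom writer):

    import org.apache.flink.streaming.connectors.fs.StringWriter;
    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

    public class HourlySink {
        public static BucketingSink<String> build() {
            BucketingSink<String> sink =
                    new BucketingSink<>("hdfs:///tmp/flink-out"); // placeholder path
            sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH")); // one bucket per hour
            sink.setWriter(new StringWriter<String>()); // stand-in for the custom writer
            sink.setBatchSize(128L * 1024 * 1024); // roll part files at 128 MB (example)
            return sink;
        }
    }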