Can you please check the following document and verify whether you have
enough network bandwidth to support a 30-second checkpoint interval's
worth of streaming data?
https://data-artisans.com/blog/how-to-size-your-apache-flink-cluster-general-guidelines
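
As a rough back-of-the-envelope check (assuming full, non-incremental
snapshots): checkpointing roughly 50 GB of window state every 30 seconds
means uploading about 50 GB / 30 s ≈ 1.7 GB/s (around 13 Gbit/s) to S3,
on top of the job's normal traffic. Incremental RocksDB checkpoints would
reduce this considerably.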

Regards
Bhaskar

On Wed, Sep 19, 2018 at 12:21 PM yuvraj singh <19yuvrajsing...@gmail.com>
wrote:

> Log: Checkpoint 58 of job 0efaa0e6db5c38bec81dfefb159402c0 expired
> before completing.
> I have a use case where I need to checkpoint frequently.
>
> I am using Kafka to read the stream and building a window of 1 hour,
> which always holds at least 50 GB of data and can hold more.
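>
> Roughly, the job looks like this (a simplified sketch; the topic name,
> key selector, and reduce function are placeholders, and I am showing a
> processing-time window for brevity):
>
> import java.util.Properties;
> import org.apache.flink.api.common.serialization.SimpleStringSchema;
> import org.apache.flink.api.java.functions.KeySelector;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
> import org.apache.flink.streaming.api.windowing.time.Time;
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
>
> public class WindowJobSketch {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>             StreamExecutionEnvironment.getExecutionEnvironment();
>
>         Properties kafkaProps = new Properties();
>         kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
>         kafkaProps.setProperty("group.id", "my-group");
>
>         // Read the stream from Kafka ("events" is a placeholder topic).
>         env.addSource(new FlinkKafkaConsumer011<>(
>                 "events", new SimpleStringSchema(), kafkaProps))
>             // Placeholder key selector: key by the record itself.
>             .keyBy(new KeySelector<String, String>() {
>                 @Override
>                 public String getKey(String value) {
>                     return value;
>                 }
>             })
>             // The 1-hour window that accumulates the ~50 GB of state.
>             .window(TumblingProcessingTimeWindows.of(Time.hours(1)))
>             // Placeholder reduce function.
>             .reduce((a, b) -> a)
>             .print();
>
>         env.execute("window-job-sketch");
>     }
> }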
>
> I have seen that there is no backpressure.
>
> Thanks
> Yubraj Singh
>
>
>
> On Wed, Sep 19, 2018 at 12:07 PM Jörn Franke <jornfra...@gmail.com> wrote:
>
>> What do the logfiles say?
>>
>> What does the source code look like?
>>
>> Is it really necessary to checkpoint every 30 seconds?
>>
>> On 19. Sep 2018, at 08:25, yuvraj singh <19yuvrajsing...@gmail.com>
>> wrote:
>>
>> Hi ,
>>
>> I am checkpointing to S3 with RocksDB as the state backend.
>> I am checkpointing every 30 seconds, and the timeout is 10 seconds.
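>>
>> Roughly, my configuration looks like this (a simplified sketch using
>> the Flink 1.x DataStream API; the S3 bucket path is a placeholder):
>>
>> import java.io.IOException;
>> import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>
>> public class CheckpointConfigSketch {
>>     public static void main(String[] args) throws IOException {
>>         StreamExecutionEnvironment env =
>>             StreamExecutionEnvironment.getExecutionEnvironment();
>>         // Trigger a checkpoint every 30 seconds.
>>         env.enableCheckpointing(30_000);
>>         // Expire a checkpoint that has not completed within 10 seconds.
>>         // Note that this timeout is shorter than the 30-second interval.
>>         env.getCheckpointConfig().setCheckpointTimeout(10_000);
>>         // RocksDB state backend writing checkpoints to S3; the second
>>         // argument enables incremental checkpoints.
>>         env.setStateBackend(
>>             new RocksDBStateBackend("s3://my-bucket/checkpoints", true));
>>     }
>> }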
>>
>> Most of the time it fails with: Failure Time: 11:53:17, Cause:
>> Checkpoint expired before completing.
>> I increased the timeout as well, but it is still not working for me.
>>
>> Please advise.
>>
>> Thanks
>> Yubraj Singh
>>
>>
