[ 
https://issues.apache.org/jira/browse/FLINK-30513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17948135#comment-17948135
 ] 

Zhanghao Chen commented on FLINK-30513:
---------------------------------------

[~xinchen147] We implemented this feature in our internal Flink version to 
clean up the leaked high-availability.storageDir on graceful job cancellation 
(/user/hadoop/.flink/xxxappId is already cleaned up when using the flink 
cancel/stop command), but found that the leak still exists: sometimes the 
application is simply force-killed with kill -9 (e.g. via the yarn 
application -kill command), so the cleanup is never performed. We therefore 
decided to implement a cleanup hook in our job management system, which 
continuously monitors job status, as sketched below. 
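For reference, a minimal sketch of such an external cleanup hook, assuming 
the default layout <high-availability.storageDir>/<clusterId>. How the 
terminal job state is detected (YARN API, Flink REST API, ...) is left to the 
surrounding job management system; the class and method names here are 
hypothetical, not Flink APIs.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of an external cleanup hook run by a job management system.
 * Assumes the HA data lives under <high-availability.storageDir>/<clusterId>.
 */
public class HaStorageDirCleaner {

    private final FileSystem fs;
    private final Path haStorageRoot;

    public HaStorageDirCleaner(URI haStorageDir, Configuration hadoopConf) throws Exception {
        this.fs = FileSystem.get(haStorageDir, hadoopConf);
        this.haStorageRoot = new Path(haStorageDir);
    }

    /** Deletes the per-cluster HA dir once the job is known to be terminated. */
    public void cleanupIfTerminated(String clusterId, boolean jobIsTerminated) throws Exception {
        if (!jobIsTerminated) {
            return; // never touch the dir while the job might still recover from it
        }
        Path clusterHaDir = new Path(haStorageRoot, clusterId);
        if (fs.exists(clusterHaDir)) {
            // Recursive delete removes leaked completedCheckpoint files as well
            // as the (often empty) directory itself.
            fs.delete(clusterHaDir, true);
        }
    }
}
{code}

The hook only runs after the job has reached a terminal state, mirroring what 
flink cancel/stop would have cleaned up on a graceful shutdown.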

> HA storage dir leaks on cluster termination 
> --------------------------------------------
>
>                 Key: FLINK-30513
>                 URL: https://issues.apache.org/jira/browse/FLINK-30513
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.15.0, 1.16.0, 1.17.0, 1.18.0
>            Reporter: Zhanghao Chen
>            Assignee: Zhanghao Chen
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2022-12-27-21-32-17-510.png
>
>
> *Problem*
> We found that the HA storage dir leaks on cluster termination for a Flink 
> job with HA enabled. The following picture shows the HA storage dir (here 
> on HDFS) of the cluster czh-flink-test-offline (running in application 
> mode) after cancelling the job with flink cancel. We are left with an empty 
> dir, and a large number of such leftover dirs significantly hurts the 
> stability of the HDFS NameNode.
> !image-2022-12-27-21-32-17-510.png|width=582,height=158!
>  
> Furthermore, if the user chooses to retain checkpoints on job termination, 
> the completedCheckpoint files are leaked as well. Note that these files are 
> no longer needed, since retained checkpoints are recovered directly from 
> the checkpoint data dir.
> *Root Cause*
> AbstractHaServices#closeAndCleanupAllData() cleans up the blob store but 
> does not clean up the HA storage dir.
> *Proposal*
> Clean up the HA storage dir after cleaning up the blob store in 
> AbstractHaServices#closeAndCleanupAllData().
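
For illustration, a minimal sketch of what that cleanup step could look like 
using Flink's FileSystem API. The class below, the path layout, and the exact 
hook point inside AbstractHaServices are assumptions made for this sketch, 
not the actual patch.

{code:java}
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

/**
 * Hypothetical helper illustrating the proposal: after the blob store has
 * been cleaned up in AbstractHaServices#closeAndCleanupAllData(), also delete
 * the cluster's HA storage directory.
 */
final class HaStorageDirCleanupSketch {

    private HaStorageDirCleanupSketch() {
    }

    /**
     * @param haStorageDir the directory derived from high-availability.storageDir
     *                     plus the cluster id (assumed layout)
     */
    static void deleteHaStorageDir(Path haStorageDir) throws Exception {
        FileSystem fs = haStorageDir.getFileSystem();
        if (fs.exists(haStorageDir)) {
            // Recursive delete: removes leaked completedCheckpoint files and the
            // otherwise-empty directory that would clutter the HDFS NameNode.
            fs.delete(haStorageDir, true);
        }
    }
}
{code}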



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
