[ 
https://issues.apache.org/jira/browse/HDFS-17741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua resolved HDFS-17741.
--------------------------------
      Assignee: liuguanghua
    Resolution: Invalid

> The old blocks should be added into MarkedDeleteQueue when creating a file with 
> overwrite
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-17741
>                 URL: https://issues.apache.org/jira/browse/HDFS-17741
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: liuguanghua
>            Assignee: liuguanghua
>            Priority: Major
>              Labels: pull-request-available
>
> When a file is created with overwrite, the old blocks are not deleted immediately 
> because these blocks are not added into the MarkedDeleteQueue.
> This has two consequences:
> (1) The blocks are only deleted once the datanode block report arrives.
> (2) If a block is in PendingReconstructionBlocks, it times out after 300s.
> 
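For context, the scenario described above is triggered by the standard client-side
create-with-overwrite call. The following minimal sketch (the cluster URI and path are
illustrative, not taken from the issue) shows the client operation whose NameNode-side
handling the report refers to:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OverwriteCreateExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative cluster URI; adjust for your environment.
        conf.set("fs.defaultFS", "hdfs://localhost:8020");

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/overwrite-example.txt");

        // First create: allocates the file's initial blocks.
        try (FSDataOutputStream out = fs.create(path, false)) {
          out.writeBytes("first version\n");
        }

        // Second create with overwrite=true: the NameNode removes the existing
        // inode and its blocks as part of the create. The issue concerns how
        // those old blocks are scheduled for deletion on this path.
        try (FSDataOutputStream out = fs.create(path, true)) {
          out.writeBytes("second version\n");
        }

        fs.close();
      }
    }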



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
