[ https://issues.apache.org/jira/browse/HDFS-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-1226.
-------------------------------

    Resolution: Duplicate

> Last block is temporary unavailable for readers because of crashed appender
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-1226
>                 URL: https://issues.apache.org/jira/browse/HDFS-1226
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20-append
>            Reporter: Thanh Do
>
> - Summary: the last block is unavailable to subsequent readers if the appender 
> crashes in the middle of an append workload.
>  
> - Setup:
> + # available datanodes = 3
> + # disks / datanode = 1
> + # failures = 1
> + failure type = crash
> + When/where failure happens = (see below)
>  
> - Details:
> Say a client is appending to block X on 3 datanodes: dn1, dn2, and dn3. After a 
> successful recoverBlock at the primary datanode, the client calls 
> createOutputStream, which makes all datanodes move the block file and the meta 
> file from the current directory to the tmp directory. Now suppose the client 
> crashes. Since all replicas of block X are in the tmp folders of the 
> corresponding datanodes, subsequent readers cannot read block X.
> This bug was found by our Failure Testing Service framework:
> http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-98.html
> For questions, please email us: Thanh Do (than...@cs.wisc.edu) and 
> Haryadi Gunawi (hary...@eecs.berkeley.edu)
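The failure sequence described in the report can be sketched as a toy simulation. This is illustrative only, not Hadoop code: the class and method names (`AppendCrashSketch`, `finalizeBlock`, `beginAppend`, `readable`) are hypothetical stand-ins for the datanode behavior of moving an appended replica from the current directory to tmp, where readers cannot see it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of one datanode: each block replica lives in either the
// "current" or the "tmp" directory, and readers only see "current".
public class AppendCrashSketch {
    // blockId -> directory holding the replica on this simulated datanode
    private final Map<String, String> blockDir = new HashMap<>();

    // a finalized replica sits in the current directory and is readable
    void finalizeBlock(String blockId) {
        blockDir.put(blockId, "current");
    }

    // starting an append moves the replica into tmp (as createOutputStream
    // does in the reported scenario); if the appender crashes now, the
    // replica is stranded there
    void beginAppend(String blockId) {
        blockDir.put(blockId, "tmp");
    }

    // readers can only open replicas that live in the current directory
    boolean readable(String blockId) {
        return "current".equals(blockDir.get(blockId));
    }

    public static void main(String[] args) {
        AppendCrashSketch dn = new AppendCrashSketch();
        dn.finalizeBlock("blk_X");
        System.out.println(dn.readable("blk_X")); // true: finalized replica

        dn.beginAppend("blk_X");
        // ...appender crashes here, before the append completes...
        System.out.println(dn.readable("blk_X")); // false: readers are blocked
    }
}
```

With all three replicas in the same state, every datanode gives the same answer, which is why subsequent readers cannot read block X at all until recovery moves the replica back.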

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
