[
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack reopened HDFS-630:
------------------------
Reopening so I can submit an improved patch.
> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific
> datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-630
> URL: https://issues.apache.org/jira/browse/HDFS-630
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs client
> Affects Versions: 0.21.0
> Reporter: Ruyue Ma
> Assignee: Cosmin Lehene
> Priority: Minor
> Attachments: 0001-Fix-HDFS-630-0.21-svn.patch,
> 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch,
> 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch,
> 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch,
> 0001-Fix-HDFS-630-trunk-svn-2.patch, HDFS-630.patch
>
>
> Created from HDFS-200.
> If, during a write, the DFSClient sees that a block replica location for a
> newly allocated block is unreachable, it re-requests the NN for a fresh set
> of replica locations for the block. It tries this
> dfs.client.block.write.retries times (default 3), sleeping 6 seconds between
> each retry (see DFSClient.nextBlockOutputStream).
> This setting works well when you have a reasonably sized cluster; if you have
> only a few datanodes in the cluster, every retry may pick the same dead
> datanode and the above logic bails out.
> Our solution: when requesting block locations from the namenode, we give the
> NN the list of excluded datanodes. The list of dead datanodes applies to only
> one block allocation.
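For illustration, here is a minimal sketch of the retry loop the description implies. All names below (BlockAllocator, allocateBlock, tryConnect, and the stub interfaces) are hypothetical stand-ins for the HDFS internals the patch actually touches (DFSClient.nextBlockOutputStream and the addBlock RPC); only the control flow, the retry count, and the 6-second backoff come from the description above.

{code:java}
import java.util.ArrayList;
import java.util.List;

class BlockAllocator {
    /** Stand-in for org.apache.hadoop.hdfs.protocol.DatanodeInfo. */
    interface DatanodeInfo { String getName(); }

    /** Stand-in for the located-block result returned by the namenode. */
    interface LocatedBlock { DatanodeInfo[] getLocations(); }

    /** Stand-in for the namenode RPC; the proposal adds the excludedNodes argument. */
    interface NameNode {
        LocatedBlock addBlock(String src, String clientName,
                              DatanodeInfo[] excludedNodes);
    }

    private final NameNode namenode;
    private final int maxRetries;  // dfs.client.block.write.retries, default 3

    BlockAllocator(NameNode namenode, int maxRetries) {
        this.namenode = namenode;
        this.maxRetries = maxRetries;
    }

    /**
     * Asks the namenode for a block, remembering datanodes we could not
     * connect to so the next request avoids them. The exclusion list lives
     * only for this one block allocation.
     */
    LocatedBlock allocateBlock(String src, String clientName)
            throws InterruptedException {
        List<DatanodeInfo> excluded = new ArrayList<DatanodeInfo>();
        for (int retry = 0; retry <= maxRetries; retry++) {
            LocatedBlock block = namenode.addBlock(
                src, clientName, excluded.toArray(new DatanodeInfo[0]));
            DatanodeInfo bad = tryConnect(block);  // null means the pipeline is up
            if (bad == null) {
                return block;
            }
            excluded.add(bad);                     // avoid this node on the next try
            Thread.sleep(6000);                    // matches the existing 6s backoff
        }
        throw new RuntimeException("Unable to create new block");
    }

    /** Attempts to open the write pipeline; returns the first dead node, if any. */
    private DatanodeInfo tryConnect(LocatedBlock block) {
        // Connection logic elided; would return the unreachable DatanodeInfo or null.
        return null;
    }
}
{code}

Because the excluded list is rebuilt on each call to allocateBlock, a datanode skipped for one block can still be chosen for the next one, which matches the "only one block allocation" scope described above.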