In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific
datanodes when locating the next block.
-------------------------------------------------------------------------------------------------------------------
Key: HDFS-630
URL: https://issues.apache.org/jira/browse/HDFS-630
Project: Hadoop HDFS
Issue Type: New Feature
Components: hdfs client
Affects Versions: 0.20.1, 0.21.0
Reporter: Ruyue Ma
Assignee: Ruyue Ma
Priority: Minor
Fix For: 0.21.0
Created from HDFS-200.
If, during a write, the DFSClient finds that a replica location for a newly
allocated block is not connectable, it asks the NameNode again for a fresh set
of replica locations for the block. It retries this dfs.client.block.write.retries
times (default 3), sleeping 6 seconds between retries (see
DFSClient.nextBlockOutputStream).
This works well on a reasonably sized cluster, but with only a few datanodes in
the cluster every retry may pick the same dead datanode again, and the logic
above bails out.
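A simplified sketch of that retry behavior, for illustration only (BlockAllocator,
allocateBlock and the stubbed createBlockOutputStream here are stand-ins, not the
actual 0.20/0.21 DFSClient code): the client cannot tell the NameNode which
datanode it failed to reach, so on a small cluster the same dead node can come
back on every attempt.

import java.io.IOException;

public class NextBlockRetrySketch {

    // Stand-in for the NameNode side of block allocation.
    interface BlockAllocator {
        String[] allocateBlock(String src, String clientName) throws IOException;
    }

    // Stand-in for the pipeline setup; pretend the target is unreachable.
    static boolean createBlockOutputStream(String[] targets) {
        return false;
    }

    static String[] nextBlockOutputStream(BlockAllocator nn, String src,
                                          String clientName) throws IOException {
        int retries = 3;                 // dfs.client.block.write.retries default
        while (true) {
            String[] targets = nn.allocateBlock(src, clientName);
            if (createBlockOutputStream(targets)) {
                return targets;          // pipeline set up successfully
            }
            if (--retries < 0) {
                // On a small cluster the NameNode may have returned the same
                // dead datanode on every attempt, so we give up here.
                throw new IOException("Unable to create new block.");
            }
            try {
                Thread.sleep(6000);      // 6-second pause between retries
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted during retry", ie);
            }
        }
    }
}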
Our solution: when requesting a block location from the NameNode, the client
also sends the datanodes to exclude. The list of excluded (dead) datanodes
applies only to a single block allocation.
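A hedged sketch of the proposed change, under the assumption that the NameNode
allocation call accepts an exclusion list (ExcludingAllocator, allocateBlock and
tryConnect below are illustrative stand-ins, not the actual HDFS-630 patch): the
client accumulates the nodes it could not connect to and passes them back on
each retry, and the list is discarded once the block's pipeline is set up.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ExcludeDatanodeSketch {

    // Stand-in NameNode interface that accepts an exclusion list.
    interface ExcludingAllocator {
        String[] allocateBlock(String src, String clientName,
                               List<String> excludedNodes) throws IOException;
    }

    static String[] nextBlockOutputStream(ExcludingAllocator nn, String src,
                                          String clientName) throws IOException {
        // The exclusion list lives only for this one block allocation.
        List<String> excluded = new ArrayList<>();
        int retries = 3;                          // dfs.client.block.write.retries
        while (true) {
            String[] targets = nn.allocateBlock(src, clientName, excluded);
            String badNode = tryConnect(targets); // null means pipeline succeeded
            if (badNode == null) {
                return targets;
            }
            excluded.add(badNode);                // don't ask for this node again
            if (--retries < 0) {
                throw new IOException("Unable to create new block.");
            }
        }
    }

    // Stand-in for the pipeline setup; returns the unreachable node, if any.
    static String tryConnect(String[] targets) {
        return targets.length > 0 ? null : "";
    }
}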