dud created HADOOP-14411:
----------------------------
Summary: HDFS architecture documentation describes outdated placement policy
Key: HADOOP-14411
URL: https://issues.apache.org/jira/browse/HADOOP-14411
Project: Hadoop Common
Issue Type: Improvement
Reporter: dud
Priority: Minor
Hello
I've noticed that the default block placement policy described in the official
HDFS architecture documentation is inconsistent with what one can read on
several other websites.
After digging, I've found out that the actual default placement policy is:
{code}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine,
* otherwise a random datanode. The 2nd replica is placed on a datanode
* that is on a different rack. The 3rd replica is placed on a datanode
* which is on a different node of the rack as the second replica.
{code}
[source|https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#l45]
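To make that selection order concrete, here is a minimal, self-contained sketch of the policy as quoted above. This is not the actual BlockPlacementPolicyDefault code; the Node record and pickRandom helper are hypothetical, and it assumes the common reading of "a different node of the rack as the second replica" as "a different node on the same rack as the second replica".
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Random;
import java.util.function.Predicate;

/** Simplified illustration of the default replica placement order. */
public class PlacementSketch {

    /** A hypothetical datanode identified by its host and rack. */
    record Node(String host, String rack) {}

    private final Random random = new Random();

    List<Node> chooseTargets(Optional<Node> writer, List<Node> cluster) {
        List<Node> targets = new ArrayList<>();

        // 1st replica: the writer's own datanode if the writer runs on one,
        // otherwise a random datanode.
        Node first = writer.orElseGet(() -> pickRandom(cluster, n -> true));
        targets.add(first);

        // 2nd replica: a datanode on a different rack than the first.
        Node second = pickRandom(cluster, n -> !n.rack().equals(first.rack()));
        targets.add(second);

        // 3rd replica: a different datanode on the same rack as the second.
        Node third = pickRandom(cluster,
                n -> n.rack().equals(second.rack()) && !n.equals(second));
        targets.add(third);

        return targets;
    }

    private Node pickRandom(List<Node> cluster, Predicate<Node> ok) {
        List<Node> candidates = cluster.stream().filter(ok).toList();
        return candidates.get(random.nextInt(candidates.size()));
    }
}
{code}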
HADOOP-5734 was opened about this mistake several years ago and was eventually
merged, but unfortunately that fix was later overwritten by [~tucu00] in [SVN
commit|https://svn.apache.org/viewvc?view=revision&revision=1425527].
dud