Gang Xie created HDFS-12820:
-------------------------------
Summary: Decommissioned datanode counted as in service causes
datanode allocation failure
Key: HDFS-12820
URL: https://issues.apache.org/jira/browse/HDFS-12820
Project: Hadoop HDFS
Issue Type: Bug
Components: block placement
Affects Versions: 2.4.0
Reporter: Gang Xie
When allocating a datanode for a DFSClient write with load consideration
enabled (dfs.namenode.replication.considerLoad), the block placement policy
checks whether a candidate datanode is overloaded by comparing its xceiver
count against the average xceiver count of all in-service datanodes. But when
a datanode is decommissioned and then becomes dead, it is still counted as in
service, which drags the computed average load far below the real per-node
load of the live datanodes, especially when the number of decommissioned
datanodes is large. In our cluster of 180 datanodes, 100 of them
decommissioned, the computed average load is 17, and this fails every
datanode allocation.
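
For context, the check described above lives in
BlockPlacementPolicyDefault#isGoodTarget. Below is a minimal paraphrase, not
a verbatim quote: the class, method, and parameter names are simplified, and
only the hardcoded 2.0 factor is taken from the 2.x policy.

    // Paraphrased sketch of the considerLoad check: a candidate target is
    // rejected as "too busy" when its xceiver count exceeds twice the
    // average over the datanodes the stats object counts as in service.
    final class ConsiderLoadCheck {
      static boolean isTooBusy(int nodeXceivers, long inServiceXceivers,
                               int nodesInService) {
        double avgLoad = nodesInService == 0
            ? 0 : (double) inServiceXceivers / nodesInService;
        return nodeXceivers > 2.0 * avgLoad; // 2.0 is hardcoded in 2.x
      }
    }

The counter bookkeeping that goes stale is in the stats subtract() path: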
    private void subtract(final DatanodeDescriptor node) {
      capacityUsed -= node.getDfsUsed();
      blockPoolUsed -= node.getBlockPoolUsed();
      xceiverCount -= node.getXceiverCount();
      // Condition highlighted in red in the original report: a node that is
      // decommissioning or decommissioned bypasses the in-service counter
      // updates, so a stale nodesInService count survives the node's death.
      if (!(node.isDecommissionInProgress() || node.isDecommissioned())) {
        nodesInService--;
        nodesInServiceXceiverCount -= node.getXceiverCount();
        capacityTotal -= node.getCapacity();
        capacityRemaining -= node.getRemaining();
      } else {
        capacityTotal -= node.getDfsUsed();
      }
      cacheCapacity -= node.getCacheCapacity();
      cacheUsed -= node.getCacheUsed();
    }
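
To see why the stale count is fatal, here is a self-contained sketch that
plugs in the numbers from this report (180 datanodes, 100 of them
decommissioned and dead). The per-live-node load of 38 xceivers is a
hypothetical value chosen to reproduce the reported average of 17:

    public class StaleInServiceDemo {
      public static void main(String[] args) {
        int liveNodes = 80;        // datanodes really serving traffic
        int staleNodes = 100;      // decommissioned + dead, never subtracted
        int perLiveNodeLoad = 38;  // hypothetical real xceivers per live node

        long inServiceXceivers = (long) liveNodes * perLiveNodeLoad;

        // Buggy: dead decommissioned nodes still inflate the denominator.
        double buggyAvg = (double) inServiceXceivers / (liveNodes + staleNodes);
        // Correct: average only over the nodes really in service.
        double realAvg = (double) inServiceXceivers / liveNodes;

        System.out.printf("buggy avg = %.1f -> threshold = %.1f%n",
            buggyAvg, 2 * buggyAvg);   // ~16.9 -> ~33.8
        System.out.printf("real  avg = %.1f -> threshold = %.1f%n",
            realAvg, 2 * realAvg);     // 38.0 -> 76.0

        // Every live node carries ~38 xceivers, above the buggy ~33.8
        // threshold, so the placement policy rejects all of them and
        // block allocation fails cluster-wide.
        System.out.println("live node rejected under buggy stats: "
            + (perLiveNodeLoad > 2 * buggyAvg));
      }
    }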