ZanderXu created HDFS-16593:
-------------------------------
Summary: Correct inaccurate BlocksRemoved metric on DataNode side
Key: HDFS-16593
URL: https://issues.apache.org/jira/browse/HDFS-16593
Project: Hadoop HDFS
Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu
While tracing the root cause of a production issue, I found that the BlocksRemoved
metric on the DataNode side was inaccurate.
{code:java}
    case DatanodeProtocol.DNA_INVALIDATE:
      //
      // Some local block(s) are obsolete and can be
      // safely garbage-collected.
      //
      Block toDelete[] = bcmd.getBlocks();
      try {
        // using global fsdataset
        dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
      } catch(IOException e) {
        // Exceptions caught here are not expected to be disk-related.
        throw e;
      }
      dn.metrics.incrBlocksRemoved(toDelete.length);
      break;
{code}
The count is wrong because {{incrBlocksRemoved(toDelete.length)}} is skipped
entirely when the invalidate method throws, even though some of the blocks may
already have been deleted successfully before the exception was raised.
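One way to keep the metric accurate is to count each block as it is actually deleted, rather than adding {{toDelete.length}} after the fact. The sketch below is only an illustration of that idea, not the actual HDFS patch; {{deleteBlock}} and the {{AtomicLong}} counter are hypothetical stand-ins for the per-block deletion inside {{FsDatasetImpl.invalidate}} and for {{DataNodeMetrics.incrBlocksRemoved}}.

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

public class BlocksRemovedSketch {
  // Stand-in for dn.metrics.incrBlocksRemoved(...).
  static final AtomicLong blocksRemoved = new AtomicLong();

  // Stand-in for invalidate(): delete blocks one by one and count
  // each successful deletion, so a failure partway through does not
  // lose the count of blocks that were in fact removed.
  static void invalidate(long[] toDelete) throws IOException {
    IOException firstFailure = null;
    for (long blockId : toDelete) {
      try {
        deleteBlock(blockId);            // hypothetical per-block delete
        blocksRemoved.incrementAndGet(); // count only real deletions
      } catch (IOException e) {
        if (firstFailure == null) {
          firstFailure = e;              // keep going, report later
        }
      }
    }
    if (firstFailure != null) {
      throw firstFailure;
    }
  }

  // Simulated deletion: blocks with negative ids fail.
  static void deleteBlock(long blockId) throws IOException {
    if (blockId < 0) {
      throw new IOException("cannot delete block " + blockId);
    }
  }

  public static void main(String[] args) {
    try {
      invalidate(new long[] {1, -2, 3});
    } catch (IOException e) {
      // Block -2 failed, but blocks 1 and 3 were still deleted.
    }
    System.out.println("BlocksRemoved = " + blocksRemoved.get());
    // → BlocksRemoved = 2
  }
}
{code}

With this shape, the metric reflects the two blocks that were really removed even though the call as a whole failed, whereas the current code would report zero.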
--
This message was sent by Atlassian Jira
(v8.20.7#820007)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]