[
https://issues.apache.org/jira/browse/HDFS-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hui Fei resolved HDFS-16316.
----------------------------
Fix Version/s: 3.4.0
Resolution: Fixed
> Improve DirectoryScanner: add regular file check related block
> --------------------------------------------------------------
>
> Key: HDFS-16316
> URL: https://issues.apache.org/jira/browse/HDFS-16316
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.9.2
> Reporter: JiangHua Zhu
> Assignee: JiangHua Zhu
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png,
> screenshot-4.png
>
> Time Spent: 6h 20m
> Remaining Estimate: 0h
>
> Something unusual happened in our production environment.
> The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). The used
> capacity calculated for 10 of the disks is normal, but the value calculated
> for the remaining disk is much larger, which is very strange.
> This is the live view on the NameNode:
> !screenshot-1.png!
> This is the live view on the DataNode:
> !screenshot-2.png!
> For comparison, this is the view on Linux:
> !screenshot-3.png!
> There is a big gap here for '/mnt/dfs/11/data'. This situation should not be
> allowed to happen.
> I found some abnormal block files.
> There are invalid blk_xxxx.meta files in some subdir directories, which cause
> the space calculation to be wrong.
> Here are some abnormal block files:
> !screenshot-4.png!
> Such files should not be treated as normal blocks. They should be actively
> identified and filtered out, which is good for cluster stability.
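> To illustrate the idea, here is a minimal sketch of the kind of regular-file
> check this improvement suggests. The class and method names
> (RegularBlockFileCheck, isRegularBlockFile) are hypothetical and not part of
> the actual patch; the point is only that a scanner should skip directory
> entries that are not regular files instead of counting them toward used
> capacity.
> {code:java}
> import java.io.File;
> import java.nio.file.Files;
> import java.nio.file.LinkOption;
>
> /** Hypothetical helper illustrating the proposed regular-file check. */
> public class RegularBlockFileCheck {
>
>   /**
>    * A block or meta file should only be counted if it is a regular file;
>    * directories, broken symlinks, devices, etc. are skipped.
>    */
>   static boolean isRegularBlockFile(File f) {
>     return Files.isRegularFile(f.toPath(), LinkOption.NOFOLLOW_LINKS);
>   }
>
>   public static void main(String[] args) {
>     File subdir = new File(args[0]);
>     File[] entries = subdir.listFiles();
>     if (entries == null) {
>       return;
>     }
>     for (File entry : entries) {
>       if (!isRegularBlockFile(entry)) {
>         // In a scanner, such an entry would be reported and filtered out
>         // rather than added to the volume's used capacity.
>         System.out.println("Skipping non-regular entry: " + entry);
>       }
>     }
>   }
> }
> {code}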
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]