Manoj Govindassamy created HDFS-11340:
-----------------------------------------

             Summary: DataNode reconfigure for disks doesn't remove the failed volumes
                 Key: HDFS-11340
                 URL: https://issues.apache.org/jira/browse/HDFS-11340
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.0.0-alpha1
            Reporter: Manoj Govindassamy
            Assignee: Manoj Govindassamy



Say a DataNode (uuid:xyz) has disks D1 and D2. When D1 turns bad, a JMX query on 
FSDatasetState-xyz correctly reports the "NumFailedVolumes" attribute as 1 and the 
"FailedStorageLocations" attribute as "D1".

Disks can be added to or removed from this DataNode by running the {{reconfigure}} 
command. Suppose the failed disk D1 is removed from the configuration, so that the 
new configuration lists only the good disk D2. After running the reconfigure command 
on this DataNode with the new disk configuration, the expectation is that the 
DataNode would no longer report any "NumFailedVolumes" or "FailedStorageLocations". 
However, even after the failed disk is removed from the configuration and the 
reconfigure completes successfully, the DataNode continues to report 
"NumFailedVolumes" as 1 and "FailedStorageLocations" as "D1", and these values are 
never reset.





