[ https://issues.apache.org/jira/browse/HADOOP-10180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Kambatla resolved HADOOP-10180.
---------------------------------------

    Resolution: Invalid

JIRA is for developers to track work items and for filing specific bug reports. 
For general user questions and help with using Hadoop, please use the user@ 
mailing list. 

> Getting some error while increasing the hadoop cluster size. 
> -------------------------------------------------------------
>
>                 Key: HADOOP-10180
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10180
>             Project: Hadoop Common
>          Issue Type: Task
>            Reporter: Ravi Hemnani
>            Priority: Trivial
>
> We have a 5-node hadoop cluster and we are trying to increase its capacity. We 
> have added 2 new disks to each of the 5 boxes and followed all the steps for 
> adding the disks to the hadoop cluster. Everything works fine except that when 
> we restart a datanode, errors appear multiple times in its log file. The 
> following is the error that appears in the log files: 
> 2013-12-23 14:32:19,406 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DatanodeRegistration(172.16.200.128:50010, 
> storageID=DS-1937554000-172.16.200.128-50010-1376068931321, infoPort=50075, 
> ipcPort=50020):DataXceiver
> org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block 
> blk_-8997395530627676954_276834 is valid, and cannot be written to.
>       at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:1428)
>       at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:114)
>       at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:302)
>       at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
>       at java.lang.Thread.run(Thread.java:724)
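
For context, expanding datanode storage in this way normally means appending the 
new mount points to the data-directory property in hdfs-site.xml on each node and 
restarting the datanode. A minimal sketch, assuming illustrative paths /data3 and 
/data4 for the new disks (the property is dfs.data.dir on Hadoop 1.x and 
dfs.datanode.data.dir on 2.x):

  <property>
    <!-- Comma-separated list of local directories where the datanode stores
         blocks; the two new disks are appended to the existing entries. -->
    <name>dfs.data.dir</name>
    <value>/data1/dfs/dn,/data2/dfs/dn,/data3/dfs/dn,/data4/dfs/dn</value>
  </property>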



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
