Thanks~

but I still do not know how to deal with the issue.

I can see that the datanode daemon is dead, but I cannot restart it because
the log file also shows that something is wrong in RPC (right?)
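
This is roughly what I run on the dead node to try to restart the daemon
and see the error (paths assume a tarball install under $HADOOP_HOME; the
log file name varies with user and host):

$ cd $HADOOP_HOME
$ bin/hadoop-daemon.sh start datanode
$ tail -n 100 logs/hadoop-*-datanode-*.log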


On Mon, Jun 1, 2009 at 6:20 PM, HRoger <hanxianyongro...@163.com> wrote:
>
> Hi! All of the steps should be done on the namenode.
> You can execute "-report" twice, once before the "-refreshNodes" and once
> after, then compare the results!
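> For example (the output file names here are just placeholders):
>
> $ bin/hadoop dfsadmin -report > before.txt
> $ bin/hadoop dfsadmin -refreshNodes
> $ bin/hadoop dfsadmin -report > after.txt
> $ diff before.txt after.txt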
>
> jonhson.ian wrote:
>>
>> On Mon, Jun 1, 2009 at 12:35 AM, HRoger <hanxianyongro...@163.com> wrote:
>>>
>>> You should do that the right way, following these steps:
>>> 1. Create a new file named "excludes" under $HADOOP_HOME, with one
>>> datanode hostname (or IP) per line.
>>> 2. Edit hadoop-site.xml by adding
>>> <property>
>>>   <name>dfs.hosts.exclude</name>
>>>   <value>excludes</value>
>>> </property>
>>> and save it.
>>> 3. Execute the command "bin/hadoop dfsadmin -refreshNodes" on the
>>> namenode host.
>>> 4. When step 3 has finished, run "bin/hadoop dfsadmin -report" and
>>> check the result (see the example below).
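>>> For example, to retire the datanode at 10.61.0.5 (the address here is
>>> just an illustration):
>>>
>>> $ cat $HADOOP_HOME/excludes
>>> 10.61.0.5
>>> $ bin/hadoop dfsadmin -refreshNodes
>>> $ bin/hadoop dfsadmin -report | grep -A 1 "Name: 10.61.0.5"
>>> Name: 10.61.0.5:50010
>>> Decommission Status : Decommission in progress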
>>>
>>
>>
>> I executed all of the above steps on the namenode and got the following
>> output (without restarting Hadoop):
>>
>> ----------------- dump of screen -----------------------
>>
>> $ bin/hadoop dfsadmin -refreshNodes
>> [had...@hdt0 hadoop-0.19.1]$ bin/hadoop dfsadmin -report
>> Safe mode is ON
>> Configured Capacity: 152863682560 (142.37 GB)
>> Present Capacity: 84421242880 (78.62 GB)
>> DFS Remaining: 84370862080 (78.58 GB)
>> DFS Used: 50380800 (48.05 MB)
>> DFS Used%: 0.06%
>>
>> -------------------------------------------------
>> Datanodes available: 1 (3 total, 2 dead)
>>
>> Name: 10.61.0.5:50010
>> Decommission Status : Decommission in progress
>> Configured Capacity: 152863682560 (142.37 GB)
>> DFS Used: 50380800 (48.05 MB)
>> Non DFS Used: 68442439680 (63.74 GB)
>> DFS Remaining: 84370862080(78.58 GB)
>> DFS Used%: 0.03%
>> DFS Remaining%: 55.19%
>> Last contact: Mon Jun 01 17:32:59 CST 2009
>>
>>
>> Name: 10.61.0.7
>> Decommission Status : Normal
>> Configured Capacity: 0 (0 KB)
>> DFS Used: 0 (0 KB)
>> Non DFS Used: 0 (0 KB)
>> DFS Remaining: 0(0 KB)
>> DFS Used%: 100%
>> DFS Remaining%: 0%
>> Last contact: Thu Jan 01 08:00:00 CST 1970
>>
>>
>> Name: 10.61.0.143
>> Decommission Status : Normal
>> Configured Capacity: 0 (0 KB)
>> DFS Used: 0 (0 KB)
>> Non DFS Used: 0 (0 KB)
>> DFS Remaining: 0(0 KB)
>> DFS Used%: 100%
>> DFS Remaining%: 0%
>> Last contact: Thu Jan 01 08:00:00 CST 1970
>>
>> -----------------------------------------------------------------
>>
>> Two nodes are dead... hmm... what happened? (Their "Last contact" is
>> the Unix epoch, i.e. the namenode has never heard from them at all.)
>> Any help?
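>>
>> In case it helps, this is what I will look at next (host and log path
>> are placeholders for my setup):
>>
>> $ bin/hadoop dfsadmin -safemode get
>> $ ssh 10.61.0.7 'tail -n 100 /path/to/hadoop/logs/hadoop-*-datanode-*.log'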
>>
>>
>> Thanks again,
>>
>> Ian
>>
>>
>
