Hi Ashish!

Try the following things:

-> Check the config file (hadoop-site.xml) on the namenode.
-> Make sure the value you have given for the dfs.datanode.address property
is correct, i.e. that machine's IP and hostname (see the sample config below).
-> Also, check that the machine names are added to the /etc/hosts file
(sample entries below).
-> Check that the ssh keys of the datanodes are present in the namenode's
known_hosts file.
-> Check the value of dfs.datanode.address in the datanode's config file as
well.
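
For reference, here is a minimal sketch of the relevant hadoop-site.xml
entries. The host hadoop1 and port 9000 are taken from your log; the
datanode value is the usual 0.18 default, so adjust both to your setup:

  <configuration>
    <property>
      <name>fs.default.name</name>
      <!-- must point at the namenode and be reachable from every datanode -->
      <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
      <name>dfs.datanode.address</name>
      <!-- address the datanode daemon binds to; 0.0.0.0 means all interfaces -->
      <value>0.0.0.0:50010</value>
    </property>
  </configuration>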

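For the /etc/hosts check, both virtual machines should be able to resolve
each other's hostnames. A sketch (192.168.1.28/hadoop1 come from your log;
the datanode's name and IP below are placeholders, so use your own):

  192.168.1.28   hadoop1    # namenode, IP from your log
  192.168.1.29   hadoop2    # datanode, placeholder name/IP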

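Also, since your log shows the datanode endlessly retrying the connection to
hadoop1/192.168.1.28:9000, it is worth verifying from the datanode VM that
the namenode's IPC port is actually reachable, for example:

  ping -c 3 hadoop1            # does hadoop1 resolve to 192.168.1.28?
  telnet 192.168.1.28 9000     # can we reach the namenode's port 9000?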

On Tue, Jun 16, 2009 at 10:58 AM, ashish pareek <pareek...@gmail.com> wrote:

> Hi,
>     I am trying to set up a Hadoop cluster on a 3GB machine, using Hadoop
> 0.18.3, and have followed the procedure given on the Apache Hadoop site for
> a Hadoop cluster.
>     In conf/slaves I have added two datanodes, i.e. including the namenode
> virtual machine and the other virtual machine (datanode), and have set up
> passwordless ssh between both virtual machines. But now the problem is when
> I run the command:
>
> bin/start-all.sh
>
> it starts only one datanode, on the same namenode virtual machine, but it
> doesn't start the datanode on the other machine.
>
> In logs/hadoop-datanode.log I get the message:
>
>
>  INFO org.apache.hadoop.ipc.Client: Retrying
>  connect to server: hadoop1/192.168.1.28:9000. Already tried 1 time(s).
>
>  2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client: Retrying
>  connect to server: hadoop1/192.168.1.28:9000. Already tried 2 time(s).
>
>  2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client: Retrying
>  connect to server: hadoop1/192.168.1.28:9000. Already tried 3 time(s).
>
>
> ...
> I have tried formatting and starting the cluster again, but I still
> get the same error.
>
> So can anyone help in solving this problem? :)
>
> Thanks
>
> Regards
>
> Ashish Pareek
>



-- 
Regards!
Sugandha
