Thanks for replying! It was a dumb mistake: I had 0.20.1 on the namenode
and 0.20.2 on the slaves. Problem solved.

Thanks again for replying! Cheers!
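
In case anyone hits the same thing, a quick sanity check is to compare the
build on every node before starting the cluster; the hostnames and install
path below are just examples:

    # Print the Hadoop build on each node; hostnames/path are examples only.
    for host in namenode slave1 slave2; do
        echo "== $host =="
        ssh "$host" '/usr/local/hadoop-0.20.1/bin/hadoop version | head -1'
    done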

-----Original Message-----
From: Thomas Graves [mailto:tgra...@yahoo-inc.com] 
Sent: Tuesday, June 14, 2011 4:30 PM
To: common-dev@hadoop.apache.org; Schmitz, Jeff GSUSI-PTT/TBIM
Subject: Re: Noob question

It looks like it thinks /usr/local/hadoop-0.20.1/ is $HADOOP_HOME. Did you
install hadoop on all the slave boxes in the same location as the box you
have working? I'm assuming you are using the start-all.sh scripts. That
script goes to each slave box, tries to cd to $HADOOP_HOME, and runs the
start commands from there.

Tom


On 6/14/11 2:09 PM, "jeff.schm...@shell.com" <jeff.schm...@shell.com>
wrote:

> Hello there!  So I was running in pseudo-distributed configuration and
> everything was working fine. Now I have some more nodes and am trying to
> run fully distributed, so I followed the docs and added the slaves file
> (sample below)...
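> 
> A conf/slaves file is just one worker hostname per line; the hostnames
> here are placeholders:
> 
>     # $HADOOP_HOME/conf/slaves -- one slave host per line (example hosts)
>     slave1.example.com
>     slave2.example.com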
> 
> Set up passphraseless ssh (see below)...
> 
>  
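> For reference, the quickstart's passphraseless-ssh setup looks like this
> (run on the master; the slave hostname is an example):
> 
>     # Generate a key with an empty passphrase and authorize it locally.
>     ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
>     cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>     # Push the same public key to each slave too (hostname hypothetical).
>     ssh-copy-id slave1.example.com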
> 
> What am I missing? I'm getting this error at startup:
> 
> 
> Cheers - 
> 
> Jeffery Schmitz
> Projects and Technology
> 3737 Bellaire Blvd, Houston, Texas 77001
> Tel: +1-713-245-7326 Fax: +1-713-245-7678
> Email: jeff.schm...@shell.com
> 
> "TK-421, why aren't you at your post?"
> 


