You have to make sure that you can SSH between the nodes. Also check the
hosts file in /etc. Both the master and the slave must have each
other's machines defined in it. Refer to my previous mail.
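
A quick way to double-check the hosts entries is a name-resolution test on
each machine, e.g. the minimal Java sketch below ("master" and "slave" are
placeholder names, substitute your actual node names; SSH connectivity still
has to be tested by hand, by logging in from one node to the other):

import java.net.InetAddress;

// Run on every node: each hostname the cluster uses should resolve
// without an exception.  "master" and "slave" are placeholders.
public class HostCheck {
    public static void main(String[] args) throws Exception {
        for (String name : new String[] {"master", "slave"}) {
            System.out.println(name + " -> "
                    + InetAddress.getByName(name).getHostAddress());
        }
    }
}
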
Mithila

On Fri, Apr 17, 2009 at 7:18 PM, jpe30 <[email protected]> wrote:

>
> ok, I have my hosts file set up the way you told me, and I changed my
> replication factor to 1.  The thing that I don't get is this line from the
> datanodes...
>
> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>
> If I have my hadoop-site.xml set up correctly, with the correct address,
> it should work, right?  It seems like the datanodes aren't getting an IP
> address to use, and I'm not sure why.
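>
> Looking at the full stack trace quoted further down, the exception is
> thrown by InetAddress.getLocalHost() (via
> org.apache.hadoop.net.DNS.getDefaultHost), so it looks like the datanode
> fails while resolving its own hostname, before it ever uses the addresses
> from hadoop-site.xml.  A minimal sketch, nothing Hadoop-specific, that
> should reproduce it on the affected node:
>
> import java.net.InetAddress;
>
> public class LocalHostCheck {
>     public static void main(String[] args) throws Exception {
>         // Fails with java.net.UnknownHostException: myhost: myhost when
>         // the machine's own hostname has no /etc/hosts entry and no DNS
>         // record.
>         System.out.println(InetAddress.getLocalHost());
>     }
> }
>
> If that throws, an /etc/hosts line mapping the node's real address to its
> hostname should clear it.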
>
>
> jpe30 wrote:
> >
> > That helps a lot actually.  I will try setting up my hosts file tomorrow
> > and make the other changes you suggested.
> >
> > Thanks!
> >
> >
> >
> > Mithila Nagendra wrote:
> >>
> >> Hi,
> >> The replication factor has to be set to 1. Also, for your dfs and job
> >> tracker configuration you should insert the name of the node rather than
> >> the IP address.
> >>
> >> For instance:
> >>  <value>192.168.1.10:54310</value>
> >>
> >> can be:
> >>
> >>  <value>master:54310</value>
> >>
> >> The nodes can be renamed by editing the hosts file in /etc on each node.
> >> It should look like the following:
> >>
> >> # Do not remove the following line, or various programs
> >> # that require network functionality will fail.
> >> 127.0.0.1       localhost.localdomain   localhost       node01
> >> 192.168.0.1     node01
> >> 192.168.0.2     node02
> >> 192.168.0.3     node03
> >>
> >> Hope this helps
> >> Mithila
> >>
> >> On Wed, Apr 15, 2009 at 9:40 PM, jpe30 <[email protected]> wrote:
> >>
> >>>
> >>> I'm setting up a Hadoop cluster and I have the name node and job
> >>> tracker up and running.  However, I cannot get any of my datanodes or
> >>> tasktrackers to start.  Here is my hadoop-site.xml file...
> >>>
> >>>
> >>>
> >>> <?xml version="1.0"?>
> >>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >>>
> >>> <!-- Put site-specific property overrides in this file. -->
> >>>
> >>> <configuration>
> >>>
> >>> <property>
> >>>  <name>hadoop.tmp.dir</name>
> >>>  <value>/home/hadoop/h_temp</value>
> >>>  <description>A base for other temporary directories.</description>
> >>> </property>
> >>>
> >>> <property>
> >>>  <name>dfs.data.dir</name>
> >>>  <value>/home/hadoop/data</value>
> >>> </property>
> >>>
> >>> <property>
> >>>  <name>fs.default.name</name>
> >>>   <value>192.168.1.10:54310</value>
> >>>  <description>The name of the default file system.  A URI whose
> >>>   scheme and authority determine the FileSystem implementation.  The
> >>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
> >>>  the FileSystem implementation class.  The uri's authority is used to
> >>>   determine the host, port, etc. for a filesystem.</description>
> >>>  <final>true</final>
> >>> </property>
> >>>
> >>> <property>
> >>>  <name>mapred.job.tracker</name>
> >>>   <value>192.168.1.10:54311</value>
> >>>  <description>The host and port that the MapReduce job tracker runs
> >>>   at.  If "local", then jobs are run in-process as a single map
> >>>  and reduce task.
> >>>   </description>
> >>> </property>
> >>>
> >>> <property>
> >>>  <name>dfs.replication</name>
> >>>  <value>0</value>
> >>>   <description>Default block replication.
> >>>   The actual number of replications can be specified when the file is
> >>> created.
> >>>  The default is used if replication is not specified in create time.
> >>>   </description>
> >>> </property>
> >>>
> >>> </configuration>
> >>>
> >>>
> >>> and here is the error I'm getting...
> >>>
> >>>
> >>>
> >>>
> >>> 2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode:
> >>> STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting DataNode
> >>> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.18.3
> >>> STARTUP_MSG:   build =
> >>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
> >>> 736250;
> >>> compiled by 'ndaley' on Thu Jan 22 23:12:0$
> >>> ************************************************************/
> >>> 2009-04-15 14:00:48,355 ERROR org.apache.hadoop.dfs.DataNode:
> >>> java.net.UnknownHostException: myhost: myhost
> >>>        at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
> >>>        at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
> >>>        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:249)
> >>>        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
> >>>        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
> >>>        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
> >>>        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
> >>>        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)
> >>>
> >>> 2009-04-15 14:00:48,356 INFO org.apache.hadoop.dfs.DataNode:
> >>> SHUTDOWN_MSG:
> >>> /************************************************************
> >>> SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException:
> >>> myhost: myhost
> >>> ************************************************************/
> >>>
> >>>
> >>
> >>
> >
> >
>
>
>
