Hi Philip,

Tried the script.  It seems that the script could start a cluster, but the web
page did not work.  Got the following error from the web interface:

HTTP ERROR: 404
/dfshealth.jsp
RequestURI=/dfshealth.jsp
Powered by Jetty://
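
A guess at the cause: the script below copies bin, lib and contrib but not the
webapps directories, so Jetty may have no hdfs webapp to serve.  A quick check
(a sketch only; the all/webapps location is an assumption about where the
daemon's classpath would pick it up):

# hypothetical check: is the packaged hdfs webapp visible to the daemon?
ls all/webapps/hdfs/dfshealth.jsp 2>/dev/null \
  || echo "hdfs webapp missing -- copy */build/*-dev/webapps/* into all/webapps"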

Thanks,
Nicholas



----- Original Message ----
> From: Philip Zeyliger <phi...@cloudera.com>
> To: hdfs-...@hadoop.apache.org
> Cc: common-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Sent: Monday, August 10, 2009 6:32:40 PM
> Subject: Re: Question: how to run hadoop after the project split?
> 
> FWIW, I've been using the following simple shell script:
> 
> [0]doorstop:hadoop(149128)$cat damnit.sh
> #!/bin/bash
> 
> set -o errexit
> set -x
> 
> # the ** globs below need globstar on bash 4+ (older bash treats ** like *)
> shopt -s globstar 2>/dev/null || true
> 
> cd hadoop-common
> ant binary
> cd ..
> cd hadoop-hdfs
> ant binary
> cd ..
> cd hadoop-mapreduce
> ant binary
> cd ..
> 
> mkdir -p all/bin all/lib all/contrib
> cp hadoop-common/bin/* all/bin
> cp **/build/*.jar all/lib || true
> cp **/build/*-dev/lib/* all/lib || true
> cp **/build/*-dev/contrib/**/*.jar all/contrib
> 
> It may very well make sense to have a meta-ant target that aggregates these
> things together in a sensible way.
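> 
> A minimal sketch of what such a meta build file might look like (untested and
> hypothetical; assumes it sits in the parent directory of the three checkouts
> and that each exposes the "binary" target used above):
> 
> <project name="hadoop-all" default="binary">
>   <!-- run "ant binary" in every hadoop-* subproject -->
>   <target name="binary">
>     <subant target="binary">
>       <fileset dir="." includes="hadoop-*/build.xml"/>
>     </subant>
>   </target>
> </project>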
> 
> -- Philip
> 
> On Mon, Aug 10, 2009 at 6:24 PM, Jay Booth wrote:
> 
> > Yeah, I'm hitting the same issues.  The patch problems weren't really an
> > issue (same-line-for-same-line conflict on my checkout), but not having the
> > webapps is sort of a pain.
> >
> > Looks like ant bin-package puts the webapps dir in
> > HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps, while the daemon's expecting
> > build/webapps/hdfs.  Anyone know off the top of their heads where this is
> > specified, or have a recommended solution?  Otherwise I can hack away.
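> >
> > A stopgap until that's found (untested; the version string is copied from
> > the path above) would be to put the packaged webapp where the daemon looks:
> >
> > # copy the packaged hdfs webapp to the location the daemon expects
> > mkdir -p $HDFS_HOME/build/webapps
> > cp -r $HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps/hdfs $HDFS_HOME/build/webapps/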
> >
> > On Mon, Aug 10, 2009 at 8:59 PM, Tsz Wo (Nicholas), Sze <
> > s29752-hadoop...@yahoo.com> wrote:
> >
> > > Hi Todd,
> > >
> > > Two problems:
> > > - The patch in HADOOP-6152 cannot be applied.
> > >
> > > - I have tried an approach similar to the one described in the slides, but
> > > it did not work since Jetty cannot find the webapps directory.  See below:
> > > 2009-08-10 17:54:41,671 WARN org.mortbay.log: Web application not found
> > > file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs
> > > 2009-08-10 17:54:41,671 WARN org.mortbay.log: Failed startup of context
> > > org.mortbay.jetty.webapp.webappcont...@1884a40
> > > {/,file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs}
> > > java.io.FileNotFoundException:
> > > file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs
> > >    at org.mortbay.jetty.webapp.WebAppContext.resolveWebApp(WebAppContext.java:959)
> > >    at org.mortbay.jetty.webapp.WebAppContext.getWebInf(WebAppContext.java:793)
> > >    at org.mortbay.jetty.webapp.WebInfConfiguration.configureClassLoader(WebInfConfiguration.java:62)
> > >    at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:456)
> > >    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> > >    at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> > >    at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> > >    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> > >    at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> > >    at org.mortbay.jetty.Server.doStart(Server.java:222)
> > >    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> > >    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:464)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:362)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:309)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:300)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:405)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:399)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1165)
> > >    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1174)
> > >
> > > Thanks,
> > > Nicholas
> > >
> > >
> > >
> > >
> > > ----- Original Message ----
> > > > From: Todd Lipcon 
> > > > To: common-dev@hadoop.apache.org
> > > > Cc: hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > > > Sent: Monday, August 10, 2009 5:30:52 PM
> > > > Subject: Re: Question: how to run hadoop after the project split?
> > > >
> > > > Hey Nicholas,
> > > >
> > > > Aaron gave a presentation with his best guess at the HUG last month.  His
> > > > slides are here: http://www.cloudera.com/blog/2009/07/17/the-project-split/
> > > > (starting at slide 16)
> > > > (I'd let him reply himself, but he's out of the office this afternoon ;-) )
> > > >
> > > > Hopefully we'll get towards something better soon :-/
> > > >
> > > > -Todd
> > > >
> > > > On Mon, Aug 10, 2009 at 5:25 PM, Tsz Wo (Nicholas), Sze <
> > > > s29752-hadoop...@yahoo.com> wrote:
> > > >
> > > > > I have to admit that I don't know the official answer.  The hack below
> > > > > seems to work:
> > > > > - compile all 3 sub-projects;
> > > > > - copy everything in hdfs/build and mapreduce/build to common/build;
> > > > > - then run hadoop by the scripts in common/bin as before (rough sketch below).
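> > > > >
> > > > > In shell, roughly (a sketch only; assumes the common, hdfs and mapreduce
> > > > > checkouts are siblings, and start-all.sh is just an example entry point):
> > > > >
> > > > > for proj in common hdfs mapreduce; do
> > > > >   (cd "$proj" && ant)        # compile each sub-project
> > > > > done
> > > > > # merge hdfs and mapreduce build output into common/build
> > > > > cp -r hdfs/build/* mapreduce/build/* common/build/
> > > > > common/bin/start-all.sh      # run from common/bin as before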
> > > > >
> > > > > Any better idea?
> > > > >
> > > > > Nicholas Sze
> > > > >
> > > > >
> > >
> > >
> > >
> >
