I actually picked up the alpha PDFs of your book, great job. I'm following the
example in chapter 7 to the letter now and am still getting the same problem.
Two quick questions (and thanks in advance for your time)...
Is the ClusterMapReduceDelegate class available anywhere yet? Adding
~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk, so I've
avoided it until now. Are there any libs in there that are absolutely
necessary for this test to work?

Thanks again,

bc


jason hadoop wrote:
>
> I have a nice variant of this in the ch7 examples section of my book,
> including a standalone wrapper around the virtual cluster for allowing
> multiple test instances to share the virtual cluster - and an easier
> time poking around with the input and output datasets.
>
> It even works decently under Windows - my editor insisting on a version
> of Word too recent for CrossOver.
>
> On Mon, Apr 13, 2009 at 9:16 AM, czero <[email protected]> wrote:
>
>>
>> Sry, I forgot to include the not-IntelliJ-console output :)
>>
>> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
>> java.lang.NullPointerException
>>         at java.io.File.<init>(File.java:222)
>>         at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
>>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
>>         at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
>>         at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
>>         at java.lang.Thread.run(Thread.java:637)
>>
>> I managed to pick up the chapter in the Hadoop book that Jason mentions,
>> the one that deals with unit testing (great chapter, btw), and it looks
>> like everything is in order. He points out that this error is typically
>> caused by a bad hadoop.log.dir or a missing log4j.properties, but I
>> verified that my dir is ok and that my hadoop-0.19.1-core.jar has
>> log4j.properties in it.
>>
>> I also tried running the same test with hadoop-core/test 0.19.0 - same
>> thing.
>>
>> Thanks again,
>>
>> bc
>>
>>
>> czero wrote:
>> >
>> > Hey all,
>> >
>> > I'm also extending ClusterMapReduceTestCase and having a bit of
>> > trouble as well.
>> >
>> > Currently I'm getting:
>> >
>> > Starting DataNode 0 with dfs.data.dir:
>> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
>> > Starting DataNode 1 with dfs.data.dir:
>> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
>> > Generating rack names for tasktrackers
>> > Generating host names for tasktrackers
>> >
>> > And then nothing... it just spins on that forever. Any ideas?
>> >
>> > I have all the jetty and jetty-ext libs in the classpath, and I set
>> > hadoop.log.dir and the SAX parser correctly.
>> >
>> > This is all I have for my test class so far, I'm not even doing
>> > anything yet:
>> >
>> > public class TestDoop extends ClusterMapReduceTestCase {
>> >
>> >     @Test
>> >     public void testDoop() throws Exception {
>> >         System.setProperty("hadoop.log.dir", "~/test-logs");
>> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
>> >             "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
>> >
>> >         setUp();
>> >
>> >         System.out.println("done.");
>> >     }
>> > }
>> >
>> > Thanks!
>> >
>> > bc
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
>

--
View this message in context:
http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
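[Editor's note on the TestDoop snippet quoted above] ClusterMapReduceTestCase is a JUnit 3 TestCase, so the @Test annotation is ignored and the framework calls setUp() before the test method runs. That means the two System.setProperty calls inside testDoop() happen *after* the mini cluster has already tried to start, which matches the JobHistory NPE and the hang reported in this thread. Also note the JVM does not expand "~" in paths. The likely fix is to override setUp(), set the properties first, then call super.setUp(). The sketch below illustrates the ordering; FakeClusterTestCase is a stand-in for the real base class so it compiles without the Hadoop jars:

```java
// Sketch only: FakeClusterTestCase stands in for the real JUnit 3
// ClusterMapReduceTestCase so this runs without the Hadoop jars.
// The point is the ordering: properties must exist before super.setUp().
abstract class FakeClusterTestCase {
    protected void setUp() throws Exception {
        // The real MiniMRCluster reads hadoop.log.dir during startup and
        // dies with an NPE in JobHistory.init if the property is missing.
        if (System.getProperty("hadoop.log.dir") == null) {
            throw new NullPointerException("hadoop.log.dir not set");
        }
    }
}

public class TestDoopSketch extends FakeClusterTestCase {
    @Override
    protected void setUp() throws Exception {
        // Set properties BEFORE the cluster starts. Java does not expand
        // "~", so build an absolute path instead of "~/test-logs".
        System.setProperty("hadoop.log.dir",
                System.getProperty("java.io.tmpdir") + "/test-logs");
        System.setProperty("javax.xml.parsers.SAXParserFactory",
                "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
        super.setUp();
    }

    public void testDoop() throws Exception {
        // no explicit setUp() call here; the framework already ran it
        System.out.println("done.");
    }

    public static void main(String[] args) throws Exception {
        TestDoopSketch t = new TestDoopSketch();
        t.setUp();      // JUnit 3 would invoke this automatically
        t.testDoop();
    }
}
```

With the real base class, the same pattern applies verbatim: extend ClusterMapReduceTestCase, override setUp(), set the properties, and delegate to super.setUp().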

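[Editor's note on the pom.xml question] Rather than depending on all of ~/hadoop/libs/*.jar, the mini-cluster tests need roughly the core jar plus the test artifact (MiniDFSCluster, MiniMRCluster, and ClusterMapReduceTestCase ship in the test jar, not in hadoop-core), along with the jetty/jetty-ext jars mentioned in the thread. A hedged sketch of the dependency block; the exact groupId and versions available in your repository may differ, so treat these coordinates as placeholders to verify:

```xml
<dependencies>
  <!-- core Hadoop classes -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>0.19.1</version>
  </dependency>
  <!-- MiniDFSCluster / MiniMRCluster / ClusterMapReduceTestCase
       live in the test artifact, not in hadoop-core -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-test</artifactId>
    <version>0.19.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```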