Thanks Uma!

Another question:

When I run "ant test", it seems like the first test of TestDFSShell throws an 
NPE and then the rest of them say they can't get a lock. Does anyone know why 
this is happening? If I comment out the first test, the second one throws the 
NPE instead, so it doesn't seem to be an error in any specific test.

2012-01-29 21:22:30,642 INFO  common.Storage (Storage.java:lock(601)) - Cannot 
lock storage /home/ben/Scripts/hadoop/hadoop/build/test/data/dfs/name1. The 
directory is already locked.

----

Testcase: testRecrusiveRm took 4.403 sec
Caused an ERROR
null
java.lang.NullPointerException
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:422)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:124)
at org.apache.hadoop.hdfs.TestDFSShell.testRecrusiveRm(TestDFSShell.java:137)

Testcase: testDu took 0.172 sec
Caused an ERROR
Cannot lock storage /home/ben/Scripts/hadoop/hadoop/build/test/data/dfs/name1. 
The directory is already locked.
java.io.IOException: Cannot lock storage 
/home/ben/Scripts/hadoop/hadoop/build/test/data/dfs/name1. The directory is 
already locked.
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:602)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1219)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1237)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1164)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:184)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:267)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:124)
at org.apache.hadoop.hdfs.TestDFSShell.testDu(TestDFSShell.java:162)



----- Original Message -----
From: Uma Maheswara Rao G <mahesw...@huawei.com>
To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>; Ben West 
<bwsithspaw...@yahoo.com>
Cc: 
Sent: Sunday, January 29, 2012 6:41 PM
Subject: RE: Unit tests


________________________________________
From: Ben West [bwsithspaw...@yahoo.com]
Sent: Sunday, January 29, 2012 11:11 PM
To: common-dev@hadoop.apache.org
Subject: Unit tests

Hello,

I'm trying to write a unit test for my patch at 
https://issues.apache.org/jira/browse/HADOOP-7943 and I have a few questions 
based on reading the wiki (http://wiki.apache.org/hadoop/HowToDevelopUnitTests):

1. The patch is for v1.0, not trunk. Should I be using the "junit.framework" 
style tests or the "org.junit" style? (I think this corresponds to JUnit v3 vs. 
v4? It seems like trunk has switched to the new style, but 1.0 has not.)
[Uma Ans] The JUnit v4 (org.junit) style tests are recommended.
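
For reference, a minimal sketch of the org.junit (v4) style looks something 
like the following; the class name, field, and values are just placeholders 
for illustration, not anything from the patch:

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class TestExample {
  private int value;

  @Before                  // replaces overriding setUp() from JUnit 3's TestCase
  public void setUp() {
    value = 42;
  }

  @Test                    // any method name works; no "test" prefix is required
  public void checksValue() {
    assertEquals(42, value);
  }
}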


2. The wiki says "Avoid starting servers (including Jetty and 
Mini{DFS|MR}Clusters) in unit tests... Try to use one of the lighter weight 
test doubles." How do I do this? I want to test copying a file from HDFS to a 
local file system; is it possible to run this test without starting a mini 
cluster?
[Uma Ans] If you have something DFS-specific to test that requires a server, 
then you must start a MiniDFSCluster. It looks like your test needs to exercise 
DFS permissions. If you are only testing local filesystem behaviour, you need 
not start a cluster; you can do the operations from local to local (see the 
sketch below).
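
To illustrate the local-to-local case, here is a rough, untested sketch that 
drives FsShell against the local file system only, with no cluster started; 
the test class name, temp file names, and the "-cp" invocation are assumptions 
for illustration, not part of the actual patch:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.junit.Test;

public class TestLocalCopy {
  @Test
  public void testCopyLocalToLocal() throws Exception {
    // Create a small source file on the local file system.
    File src = File.createTempFile("fsshell-src", ".txt");
    FileWriter w = new FileWriter(src);
    w.write("hello");
    w.close();

    File dst = new File(src.getParentFile(), "fsshell-dst.txt");
    dst.delete();

    // fs.default.name is left at its default (file:///), so no cluster is needed.
    Configuration conf = new Configuration();
    FsShell shell = new FsShell(conf);
    int exitCode = shell.run(new String[] {
        "-cp", src.getAbsolutePath(), dst.getAbsolutePath() });

    assertEquals(0, exitCode);
    assertTrue(dst.exists());
  }
}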


3. Are there any existing tests of FsShell that I could piggyback off of? 
I see trunk has some tests relating to the shell, but none testing end-to-end 
functionality like copying a file.
[Uma Ans] You should be able to find examples in TestDFSShell.java.
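
The pattern in TestDFSShell is roughly the following (a hedged sketch assuming 
the 1.0-era MiniDFSCluster constructor; the class name, paths, and file 
contents here are made up):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestCopyToLocal {
  @Test
  public void testCopyToLocal() throws Exception {
    Configuration conf = new Configuration();
    // Starts an in-process NameNode and one DataNode; the cluster rewrites
    // fs.default.name in conf so FsShell talks to it.
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
    try {
      FileSystem fs = cluster.getFileSystem();
      Path src = new Path("/user/test/src.txt");
      FSDataOutputStream out = fs.create(src);
      out.writeBytes("hello");
      out.close();

      File dst = File.createTempFile("copied", ".txt");
      dst.delete();

      FsShell shell = new FsShell(conf);
      int exit = shell.run(new String[] {
          "-copyToLocal", src.toString(), dst.getAbsolutePath() });

      assertEquals(0, exit);
      assertTrue(dst.exists());
    } finally {
      cluster.shutdown();
    }
  }
}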

Also, adding the answers to #1 and #2 to the wiki would be helpful.


Thanks!
-Ben
