See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1214/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11478 lines...]
    [exec] 2012-11-02 12:51:17,207 INFO  datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 beginning handshake with NN
    [exec] 2012-11-02 12:51:17,209 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0) storage DS-743789385-67.195.138.27-45299-1351860677132
    [exec] 2012-11-02 12:51:17,212 INFO  net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:45299
    [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:register(631)) - Block pool Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 successfully registered with NN
    [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:37009 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
    [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 trying to claim ACTIVE state with txid=1
    [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009
    [exec] 2012-11-02 12:51:17,221 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45299 after becoming active. Its block contents are no longer considered stale
    [exec] 2012-11-02 12:51:17,222 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0), blocks: 0, processing time: 1 msecs
    [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
    [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@19ccb73
    [exec] 2012-11-02 12:51:17,225 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-714465857-67.195.138.27-1351860674434
    [exec] 2012-11-02 12:51:17,229 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-714465857-67.195.138.27-1351860674434 to blockPoolScannerMap, new size=1
    [exec] 2012-11-02 12:51:17,311 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
    [exec] #
    [exec] # A fatal error has been detected by the Java Runtime Environment:
    [exec] #
    [exec] #  SIGSEGV (0xb) at pc=0xf6e97b68, pid=19749, tid=4136773328
    [exec] #
    [exec] # JRE version: 6.0_26-b03
    [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
    [exec] # Problematic frame:
    [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
    [exec] #
    [exec] # An error report file with more information is saved as:
    [exec] # /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid19749.log
    [exec] #
    [exec] # If you would like to submit a bug report, please visit:
    [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
    [exec] #
    [exec] Aborted
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:35.336s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:36.108s
[INFO] Finished at: Fri Nov 02 12:51:17 UTC 2012
[INFO] Final Memory: 18M/478M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 134 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4729
Updating MAPREDUCE-4746
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.