See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1213/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11481 lines...]
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode 
(BPServiceActor.java:offerService(499)) - For namenode 
localhost/127.0.0.1:40223 using DELETEREPORT_INTERVAL of 300000 msec  
BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; 
heartBeatInterval=3000
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode 
(BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool 
BP-1372242316-67.195.138.27-1351774289159 (storage id 
DS-956326259-67.195.138.27-45280-1351774291735) service to 
localhost/127.0.0.1:40223 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode 
(BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging 
ACTIVE Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage 
id DS-956326259-67.195.138.27-45280-1351774291735) service to 
localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,823 INFO  blockmanagement.BlockManager 
(BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first 
block report from 127.0.0.1:45280 after becoming active. Its block contents are 
no longer considered stale
     [exec] 2012-11-01 12:51:31,824 INFO  hdfs.StateChange 
(BlockManager.java:processReport(1539)) - BLOCK* processReport: from 
DatanodeRegistration(127.0.0.1, 
storageID=DS-956326259-67.195.138.27-45280-1351774291735, infoPort=55286, 
ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0), 
blocks: 0, processing time: 2 msecs
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode 
(BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to 
generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode 
(BPServiceActor.java:blockReport(428)) - sent block report, processed 
command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1277a30
     [exec] 2012-11-01 12:51:31,827 INFO  datanode.BlockPoolSliceScanner 
(BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner 
initialized with interval 504 hours for block pool 
BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,831 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:addBlockPool(248)) - Added 
bpid=BP-1372242316-67.195.138.27-1351774289159 to blockPoolScannerMap, new 
size=1
     [exec] Aborted
     [exec] 2012-11-01 12:51:31,913 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6ee9b68, pid=14319, tid=4137109200
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode 
linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14319.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE 
[1:18:17.144s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:17.929s
[INFO] Finished at: Thu Nov 01 12:51:32 UTC 2012
[INFO] Final Memory: 26M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project 
hadoop-hdfs: An Ant BuildException has occured: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4752
Updating MAPREDUCE-4724
Updating YARN-165
Updating YARN-166
Updating YARN-189
Updating YARN-159
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.