See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/changes>
Changes:

[suresh] HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. Contributed by Tsz Wo (Nicholas), SZE.

[suresh] HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao.

[umamahesh] Moved HDFS-3809 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[umamahesh] Moved HDFS-3789 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[umamahesh] Moved HDFS-3695 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[bobby] HADOOP-8986. Server$Call object is never released after it is sent (bobby)

[umamahesh] Moved HDFS-3573 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[daryn] HADOOP-8994. TestDFSShell creates file named "noFileHere", making further tests hard to understand (Andy Isaacson via daryn)

------------------------------------------
[...truncated 11290 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.359 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.415 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.861 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.035 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.899 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.804 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.357 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.226 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.104 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.75 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.423 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.952 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.093 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.594 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.742 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.037 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.411 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.445 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.216 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
     [exec] 2012-10-31 12:51:45,621 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(319)) - starting cluster with 1 namenodes.
     [exec] Formatting using clusterid: testClusterID
     [exec] 2012-10-31 12:51:45,868 INFO util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-10-31 12:51:45,870 WARN conf.Configuration (Configuration.java:warnOnceIfDeprecated(822)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
     [exec] 2012-10-31 12:51:45,870 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-10-31 12:51:45,890 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication = 1
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication = 512
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication = 1
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams = 2
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks = false
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-10-31 12:51:45,891 INFO blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer = false
     [exec] 2012-10-31 12:51:45,891 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner = jenkins (auth:SIMPLE)
     [exec] 2012-10-31 12:51:45,892 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup = supergroup
     [exec] 2012-10-31 12:51:45,892 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-10-31 12:51:45,892 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-10-31 12:51:45,896 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-10-31 12:51:46,107 INFO namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occuring more than 10 times
     [exec] 2012-10-31 12:51:46,108 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-10-31 12:51:46,109 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-10-31 12:51:46,109 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension = 0
     [exec] 2012-10-31 12:51:47,214 INFO common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1> has been successfully formatted.
     [exec] 2012-10-31 12:51:47,222 INFO common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2> has been successfully formatted.
     [exec] 2012-10-31 12:51:47,233 INFO namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,233 INFO namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,243 INFO namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-10-31 12:51:47,245 INFO namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-10-31 12:51:47,268 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171)) - Going to retain 1 images with txid >= 0
     [exec] 2012-10-31 12:51:47,315 WARN impl.MetricsConfig (MetricsConfig.java:loadFirst(123)) - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
     [exec] 2012-10-31 12:51:47,363 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 10 second(s).
     [exec] 2012-10-31 12:51:47,363 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - NameNode metrics system started
     [exec] 2012-10-31 12:51:47,376 INFO util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-10-31 12:51:47,376 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-10-31 12:51:47,390 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication = 1
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication = 512
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication = 1
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams = 2
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks = false
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-10-31 12:51:47,391 INFO blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer = false
     [exec] 2012-10-31 12:51:47,391 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner = jenkins (auth:SIMPLE)
     [exec] 2012-10-31 12:51:47,392 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup = supergroup
     [exec] 2012-10-31 12:51:47,392 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-10-31 12:51:47,392 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-10-31 12:51:47,392 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-10-31 12:51:47,393 INFO namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occuring more than 10 times
     [exec] 2012-10-31 12:51:47,393 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-10-31 12:51:47,393 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-10-31 12:51:47,393 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension = 0
     [exec] 2012-10-31 12:51:47,397 INFO common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/in_use.lock> acquired by nodename 26...@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:47,403 INFO common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/in_use.lock> acquired by nodename 26...@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:47,407 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current>
     [exec] 2012-10-31 12:51:47,407 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current>
     [exec] 2012-10-31 12:51:47,408 INFO namenode.FSImage (FSImage.java:loadFSImage(611)) - No edit log streams selected.
     [exec] 2012-10-31 12:51:47,416 INFO namenode.FSImage (FSImageFormat.java:load(167)) - Loading image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,416 INFO namenode.FSImage (FSImageFormat.java:load(170)) - Number of files = 1
     [exec] 2012-10-31 12:51:47,417 INFO namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358)) - Number of files under construction = 0
     [exec] 2012-10-31 12:51:47,417 INFO namenode.FSImage (FSImageFormat.java:load(192)) - Image file of size 122 loaded in 0 seconds.
     [exec] 2012-10-31 12:51:47,417 INFO namenode.FSImage (FSImage.java:loadFSImage(754)) - Loaded image for txid 0 from <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
     [exec] 2012-10-31 12:51:47,422 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(949)) - Starting log segment at 1
     [exec] 2012-10-31 12:51:47,602 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
     [exec] 2012-10-31 12:51:47,602 INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441)) - Finished loading FSImage in 209 msecs
     [exec] 2012-10-31 12:51:47,731 INFO ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 52090
     [exec] 2012-10-31 12:51:47,752 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615)) - Registered FSNamesystemState MBean
     [exec] 2012-10-31 12:51:47,767 INFO namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307)) - Number of blocks under construction: 0
     [exec] 2012-10-31 12:51:47,767 INFO namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858)) - initializing replication queues
     [exec] 2012-10-31 12:51:47,780 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205)) - Total number of blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206)) - Number of invalid blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207)) - Number of under-replicated blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208)) - Number of over-replicated blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210)) - Number of blocks being written = 0
     [exec] 2012-10-31 12:51:47,780 INFO hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 13 msec
     [exec] 2012-10-31 12:51:47,780 INFO hdfs.StateChange (FSNamesystem.java:leave(3835)) - STATE* Leaving safe mode after 0 secs
     [exec] 2012-10-31 12:51:47,781 INFO hdfs.StateChange (FSNamesystem.java:leave(3845)) - STATE* Network topology has 0 racks and 0 datanodes
     [exec] 2012-10-31 12:51:47,781 INFO hdfs.StateChange (FSNamesystem.java:leave(3848)) - STATE* UnderReplicatedBlocks has 0 blocks
     [exec] 2012-10-31 12:51:47,833 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
     [exec] 2012-10-31 12:51:47,882 INFO http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-10-31 12:51:47,883 INFO http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
     [exec] 2012-10-31 12:51:47,884 INFO http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-10-31 12:51:47,887 INFO http.HttpServer (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-10-31 12:51:47,893 INFO http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 60078
     [exec] 2012-10-31 12:51:47,893 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-10-31 12:51:48,056 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:60078
     [exec] 2012-10-31 12:51:48,056 INFO namenode.NameNode (NameNode.java:setHttpServerAddress(395)) - Web-server up at: localhost:60078
     [exec] 2012-10-31 12:51:48,057 INFO ipc.Server (Server.java:run(648)) - IPC Server listener on 52090: starting
     [exec] 2012-10-31 12:51:48,057 INFO ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-10-31 12:51:48,059 INFO namenode.NameNode (NameNode.java:startCommonServices(492)) - NameNode RPC up at: localhost/127.0.0.1:52090
     [exec] 2012-10-31 12:51:48,059 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647)) - Starting services required for active state
     [exec] 2012-10-31 12:51:48,062 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145)) - Starting DataNode 0 with dfs.datanode.data.dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1,file>:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
     [exec] 2012-10-31 12:51:48,078 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     [exec] 2012-10-31 12:51:48,089 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - DataNode metrics system started (again)
     [exec] 2012-10-31 12:51:48,089 INFO datanode.DataNode (DataNode.java:<init>(313)) - Configured hostname is 127.0.0.1
     [exec] 2012-10-31 12:51:48,094 INFO datanode.DataNode (DataNode.java:initDataXceiver(539)) - Opened streaming server at /127.0.0.1:34776
     [exec] 2012-10-31 12:51:48,096 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwith is 1048576 bytes/s
     [exec] 2012-10-31 12:51:48,097 INFO http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-10-31 12:51:48,098 INFO http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2012-10-31 12:51:48,098 INFO http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-10-31 12:51:48,099 INFO datanode.DataNode (DataNode.java:startInfoServer(365)) - Opened info server at localhost:0
     [exec] 2012-10-31 12:51:48,101 INFO datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-10-31 12:51:48,101 INFO http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 53237
     [exec] 2012-10-31 12:51:48,101 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-10-31 12:51:48,153 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:53237
     [exec] 2012-10-31 12:51:48,159 INFO ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 47335
     [exec] 2012-10-31 12:51:48,164 INFO datanode.DataNode (DataNode.java:initIpcServer(436)) - Opened IPC server at /127.0.0.1:47335
     [exec] 2012-10-31 12:51:48,171 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148)) - Refresh request received for nameservices: null
     [exec] 2012-10-31 12:51:48,173 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2012-10-31 12:51:48,180 INFO datanode.DataNode (BPServiceActor.java:run(658)) - Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:52090 starting to offer service
     [exec] 2012-10-31 12:51:48,184 INFO ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-10-31 12:51:48,184 INFO ipc.Server (Server.java:run(648)) - IPC Server listener on 47335: starting
     [exec] 2012-10-31 12:51:48,616 INFO common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 26...@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:48,617 INFO common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted
     [exec] 2012-10-31 12:51:48,617 INFO common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-10-31 12:51:48,620 INFO common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 26...@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:48,620 INFO common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted
     [exec] 2012-10-31 12:51:48,621 INFO common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-10-31 12:51:48,656 INFO common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-10-31 12:51:48,656 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-380395973-67.195.138.27-1351687906118> is not formatted.
     [exec] 2012-10-31 12:51:48,656 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-10-31 12:51:48,656 INFO common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-380395973-67.195.138.27-1351687906118 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-380395973-67.195.138.27-1351687906118/current>
     [exec] 2012-10-31 12:51:48,658 INFO common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-10-31 12:51:48,659 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-380395973-67.195.138.27-1351687906118> is not formatted.
     [exec] 2012-10-31 12:51:48,659 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-10-31 12:51:48,659 INFO common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-380395973-67.195.138.27-1351687906118 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-380395973-67.195.138.27-1351687906118/current>
     [exec] 2012-10-31 12:51:48,662 INFO datanode.DataNode (DataNode.java:initStorage(852)) - Setting up storage: nsid=273544147;bpid=BP-380395973-67.195.138.27-1351687906118;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0;bpid=BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,672 INFO impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>
     [exec] 2012-10-31 12:51:48,672 INFO impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>
     [exec] 2012-10-31 12:51:48,678 INFO impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209)) - Registered FSDatasetState MBean
     [exec] 2012-10-31 12:51:48,682 INFO datanode.DirectoryScanner (DirectoryScanner.java:start(243)) - Periodic Directory Tree Verification scan starting at 1351693685682 with interval 21600000
     [exec] 2012-10-31 12:51:48,683 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577)) - Adding block pool BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,689 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833)) - Waiting for cluster to become active
     [exec] 2012-10-31 12:51:48,691 INFO datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 beginning handshake with NN
     [exec] 2012-10-31 12:51:48,693 INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-1928486877-67.195.138.27-34776-1351687908623, infoPort=53237, ipcPort=47335, storageInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0) storage DS-1928486877-67.195.138.27-34776-1351687908623
     [exec] 2012-10-31 12:51:48,695 INFO net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:34776
     [exec] 2012-10-31 12:51:48,697 INFO datanode.DataNode (BPServiceActor.java:register(631)) - Block pool Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 successfully registered with NN
     [exec] 2012-10-31 12:51:48,697 INFO datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:52090 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-10-31 12:51:48,700 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 trying to claim ACTIVE state with txid=1
     [exec] 2012-10-31 12:51:48,701 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090
     [exec] 2012-10-31 12:51:48,704 INFO blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:34776 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-10-31 12:51:48,705 INFO hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1928486877-67.195.138.27-34776-1351687908623, infoPort=53237, ipcPort=47335, storageInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0), blocks: 0, processing time: 2 msecs
     [exec] 2012-10-31 12:51:48,706 INFO datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 0 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-10-31 12:51:48,706 INFO datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@189d7eb
     [exec] 2012-10-31 12:51:48,708 INFO datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,712 INFO datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-380395973-67.195.138.27-1351687906118 to blockPoolScannerMap, new size=1
     [exec] 2012-10-31 12:51:48,794 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] # SIGSEGV (0xb) at pc=0xf6ee5b68, pid=26703, tid=4137092816
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V [libjvm.so+0x3efb68] unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid26703.log>
     [exec] Aborted
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] # http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:35.682s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:36.461s
[INFO] Finished at: Wed Oct 31 12:51:49 UTC 2012
[INFO] Final Memory: 23M/350M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4129
Updating HADOOP-8994
Updating HDFS-3573
Updating HDFS-3789
Updating HADOOP-8986
Updating HDFS-3916
Updating HDFS-3695
Updating HDFS-3809