See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2128/changes>
Changes:

[arp] HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal)

[aajisaka] HADOOP-11988. Fix typo in the document for hadoop fs -find. Contributed by Kengo Seki.

------------------------------------------
[...truncated 7869 lines...]
[exec] 2015-05-17 14:17:24,109 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[exec] 2015-05-17 14:17:24,110 INFO http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
[exec] 2015-05-17 14:17:24,111 INFO http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[exec] 2015-05-17 14:17:24,113 INFO http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 43065
[exec] 2015-05-17 14:17:24,113 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
[exec] 2015-05-17 14:17:24,168 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:43065
[exec] 2015-05-17 14:17:24,294 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(150)) - Listening HTTP traffic on /127.0.0.1:36594
[exec] 2015-05-17 14:17:24,296 INFO datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
[exec] 2015-05-17 14:17:24,296 INFO datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
[exec] 2015-05-17 14:17:24,309 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
[exec] 2015-05-17 14:17:24,310 INFO ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 41644
[exec] 2015-05-17 14:17:24,317 INFO datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:41644
[exec] 2015-05-17 14:17:24,329 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
[exec] 2015-05-17 14:17:24,331 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
[exec] 2015-05-17 14:17:24,341 INFO datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:52051 starting to offer service
[exec] 2015-05-17 14:17:24,347 INFO ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
[exec] 2015-05-17 14:17:24,348 INFO ipc.Server (Server.java:run(692)) - IPC Server listener on 41644: starting
[exec] 2015-05-17 14:17:24,574 INFO common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 32...@asf909.gq1.ygridcore.net
[exec] 2015-05-17 14:17:24,574 INFO common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,574 INFO common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
[exec] 2015-05-17 14:17:24,614 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,614 INFO common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456>
[exec] 2015-05-17 14:17:24,615 INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456> is not formatted for BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,615 INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
[exec] 2015-05-17 14:17:24,615 INFO common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1054975985-67.195.81.153-1431872242456 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456/current>
[exec] 2015-05-17 14:17:24,617 INFO common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 32...@asf909.gq1.ygridcore.net
[exec] 2015-05-17 14:17:24,617 INFO common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,617 INFO common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
[exec] 2015-05-17 14:17:24,653 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,653 INFO common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456>
[exec] 2015-05-17 14:17:24,653 INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456> is not formatted for BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,653 INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
[exec] 2015-05-17 14:17:24,654 INFO common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1054975985-67.195.81.153-1431872242456 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456/current>
[exec] 2015-05-17 14:17:24,655 INFO datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=629540541;bpid=BP-1054975985-67.195.81.153-1431872242456;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=629540541;c=0;bpid=BP-1054975985-67.195.81.153-1431872242456;dnuuid=null
[exec] 2015-05-17 14:17:24,657 INFO datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID d2fa2be2-3d17-418d-b02a-270032114a57
[exec] 2015-05-17 14:17:24,678 INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2
[exec] 2015-05-17 14:17:24,678 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
[exec] 2015-05-17 14:17:24,679 INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-5881e9a8-64f4-434a-b980-ed2a486ecf63
[exec] 2015-05-17 14:17:24,679 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
[exec] 2015-05-17 14:17:24,682 INFO impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
[exec] 2015-05-17 14:17:24,688 INFO datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1431878455688 with interval 21600000
[exec] 2015-05-17 14:17:24,689 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,690 INFO impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
[exec] 2015-05-17 14:17:24,690 INFO impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
[exec] 2015-05-17 14:17:24,702 INFO impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1054975985-67.195.81.153-1431872242456 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 12ms
[exec] 2015-05-17 14:17:24,702 INFO impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1054975985-67.195.81.153-1431872242456 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 12ms
[exec] 2015-05-17 14:17:24,703 INFO impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1054975985-67.195.81.153-1431872242456: 13ms
[exec] 2015-05-17 14:17:24,703 INFO impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
[exec] 2015-05-17 14:17:24,704 INFO impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456/current/replicas> doesn't exist
[exec] 2015-05-17 14:17:24,704 INFO impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
[exec] 2015-05-17 14:17:24,704 INFO impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
[exec] 2015-05-17 14:17:24,704 INFO impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456/current/replicas> doesn't exist
[exec] 2015-05-17 14:17:24,704 INFO impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
[exec] 2015-05-17 14:17:24,704 INFO impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
[exec] 2015-05-17 14:17:24,706 INFO datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 beginning handshake with NN
[exec] 2015-05-17 14:17:24,714 INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0) storage d2fa2be2-3d17-418d-b02a-270032114a57
[exec] 2015-05-17 14:17:24,714 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
[exec] 2015-05-17 14:17:24,715 INFO net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:41243
[exec] 2015-05-17 14:17:24,720 INFO datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 successfully registered with NN
[exec] 2015-05-17 14:17:24,720 INFO datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:52051 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
[exec] 2015-05-17 14:17:24,726 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2332)) - No heartbeat from DataNode: 127.0.0.1:41243
[exec] 2015-05-17 14:17:24,727 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
[exec] 2015-05-17 14:17:24,736 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
[exec] 2015-05-17 14:17:24,736 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 for DN 127.0.0.1:41243
[exec] 2015-05-17 14:17:24,738 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 for DN 127.0.0.1:41243
[exec] 2015-05-17 14:17:24,748 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 trying to claim ACTIVE state with txid=1
[exec] 2015-05-17 14:17:24,749 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051
[exec] 2015-05-17 14:17:24,762 INFO blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 from datanode d2fa2be2-3d17-418d-b02a-270032114a57
[exec] 2015-05-17 14:17:24,763 INFO BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 node DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
[exec] 2015-05-17 14:17:24,763 INFO blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 from datanode d2fa2be2-3d17-418d-b02a-270032114a57
[exec] 2015-05-17 14:17:24,763 INFO BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 node DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
[exec] 2015-05-17 14:17:24,780 INFO datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x502b851717ac4bb, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msec to generate and 28 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
[exec] 2015-05-17 14:17:24,781 INFO datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:24,832 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
[exec] 2015-05-17 14:17:24,843 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
[exec] 2015-05-17 14:17:24,843 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
[exec] 2015-05-17 14:17:24,843 WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
[exec] 2015-05-17 14:17:24,843 INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
[exec] 2015-05-17 14:17:24,845 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
[exec] 2015-05-17 14:17:25,213 INFO ipc.Server (Server.java:stop(2569)) - Stopping server on 41644
[exec] 2015-05-17 14:17:25,214 INFO ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 41644
[exec] 2015-05-17 14:17:25,215 WARN datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 interrupted
[exec] 2015-05-17 14:17:25,215 INFO ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
[exec] 2015-05-17 14:17:25,215 WARN datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051
[exec] 2015-05-17 14:17:25,317 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57)
[exec] 2015-05-17 14:17:25,317 INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1054975985-67.195.81.153-1431872242456
[exec] 2015-05-17 14:17:25,319 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
[exec] 2015-05-17 14:17:25,319 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
[exec] 2015-05-17 14:17:25,319 INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
[exec] 2015-05-17 14:17:25,320 INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
[exec] 2015-05-17 14:17:25,325 INFO datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
[exec] 2015-05-17 14:17:25,325 INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
[exec] 2015-05-17 14:17:25,326 INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
[exec] 2015-05-17 14:17:25,326 INFO namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
[exec] 2015-05-17 14:17:25,326 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1
[exec] 2015-05-17 14:17:25,327 INFO namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
[exec] 2015-05-17 14:17:25,328 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
[exec] 2015-05-17 14:17:25,329 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
[exec] 2015-05-17 14:17:25,331 INFO ipc.Server (Server.java:stop(2569)) - Stopping server on 52051
[exec] 2015-05-17 14:17:25,333 INFO ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 52051
[exec] 2015-05-17 14:17:25,333 INFO blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
[exec] 2015-05-17 14:17:25,334 INFO ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
[exec] 2015-05-17 14:17:25,368 INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
[exec] 2015-05-17 14:17:25,368 INFO namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
[exec] 2015-05-17 14:17:25,370 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
[exec] 2015-05-17 14:17:25,471 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
[exec] 2015-05-17 14:17:25,472 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
[exec] 2015-05-17 14:17:25,472 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
[echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO]
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO]
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO]
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO]
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO]
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO]
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO]
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks
main:
[mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-17T14:19:46+00:00
[INFO] Final Memory: 67M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362788 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8157
Updating HADOOP-11988