Hadoop-Hdfs-22-branch - Build # 42 - Failure
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/42/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 3322 lines...]
compile-hdfs-test:
    [delete] Deleting directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.129 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.5 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 16.003 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 36.507 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.39 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 211.657 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 460.215 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.383 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:745: Tests failed!

Total time: 50 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

2 tests failed.

REGRESSION: org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:40326is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:40326is not an underUtilized node
    at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
    at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
    at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
    at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
    at org.apache.hadoop.hdfs.server
Hadoop-Hdfs-trunk - Build # 663 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/663/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 802087 lines...]
    [junit]
    [junit] 2011-05-11 12:34:58,072 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-05-11 12:34:58,072 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-05-11 12:34:58,073 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-417633656-127.0.1.1-1305117296637 received exception:java.lang.InterruptedException
    [junit] 2011-05-11 12:34:58,073 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:59589, storageID=DS-1005986894-127.0.1.1-59589-1305117297348, infoPort=40983, ipcPort=44207, storageInfo=lv=-35;cid=testClusterID;nsid=694291886;c=0) ending block pool service for: BP-417633656-127.0.1.1-1305117296637
    [junit] 2011-05-11 12:34:58,073 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-417633656-127.0.1.1-1305117296637 from blockPoolScannerMap
    [junit] 2011-05-11 12:34:58,073 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-417633656-127.0.1.1-1305117296637
    [junit] 2011-05-11 12:34:58,073 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-11 12:34:58,073 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-11 12:34:58,074 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1041)) - Shutting down DataNode 0
    [junit] 2011-05-11 12:34:58,074 WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(297)) - DirectoryScanner: shutdown has been called
    [junit] 2011-05-11 12:34:58,074 INFO datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work left in prev period : 0.00%
    [junit] 2011-05-11 12:34:58,175 INFO ipc.Server (Server.java:stop(1629)) - Stopping server on 52374
    [junit] 2011-05-11 12:34:58,175 INFO ipc.Server (Server.java:run(1464)) - IPC Server handler 0 on 52374: exiting
    [junit] 2011-05-11 12:34:58,175 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 52374
    [junit] 2011-05-11 12:34:58,175 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-05-11 12:34:58,175 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-05-11 12:34:58,175 WARN datanode.DataNode (DataXceiverServer.java:run(143)) - 127.0.0.1:44206:DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit]     at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
    [junit]     at java.lang.Thread.run(Thread.java:662)
    [junit]
    [junit] 2011-05-11 12:34:58,177 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-05-11 12:34:58,178 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-417633656-127.0.1.1-1305117296637 received exception:java.lang.InterruptedException
    [junit] 2011-05-11 12:34:58,178 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:44206, storageID=DS-964847194-127.0.1.1-44206-1305117297216, infoPort=55212, ipcPort=52374, storageInfo=lv=-35;cid=testClusterID;nsid=694291886;c=0) ending block pool service for: BP-417633656-127.0.1.1-1305117296637
    [junit] 2011-05-11 12:34:58,278 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-417633656-127.0.1.1-1305117296637 from blockPoolScannerMap
    [junit] 2011-05-11 12:34:58,278 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-417633656-127.0.1.1-1305117296637
    [junit] 2011-05-11 12:34:58,278 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-11 12:34:58,279 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-11 12:34:58,379 WARN namenode.FSName
[jira] [Created] (HDFS-1916) Fake jira for illustrating workflow (sorry)
Fake jira for illustrating workflow (sorry)
-------------------------------------------

                 Key: HDFS-1916
                 URL: https://issues.apache.org/jira/browse/HDFS-1916
             Project: Hadoop HDFS
          Issue Type: Task
          Components: documentation
    Affects Versions: 0.21.0, 0.20.2, 0.23.0
            Reporter: Todd Lipcon
            Priority: Trivial

The namenode explodes when it eats too much.

Steps to reproduce:
a) eat too much.
b) explode

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1917) Clean up duplication of dependent jar files
Clean up duplication of dependent jar files
-------------------------------------------

                 Key: HDFS-1917
                 URL: https://issues.apache.org/jira/browse/HDFS-1917
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: build
    Affects Versions: 0.23.0
         Environment: Java 6, RHEL 5.5
            Reporter: Eric Yang

For trunk, the build and deployment tree looks like this:

hadoop-common-0.2x.y
hadoop-hdfs-0.2x.y
hadoop-mapred-0.2x.y

Technically, hdfs's third-party dependent jar files should be fetched from hadoop-common. However, they are currently fetched from hadoop-hdfs/lib only. It would be nice to eliminate the need to repeat duplicated jar files at build time. There are two options to manage this dependency list: continue to enhance the ant build structure to fetch and filter jar file dependencies using ivy, or take the opportunity to convert the build structure to maven and use maven to manage the provided jar files.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1918) DataXceiver double logs every IOE out of readBlock
DataXceiver double logs every IOE out of readBlock
--------------------------------------------------

                 Key: HDFS-1918
                 URL: https://issues.apache.org/jira/browse/HDFS-1918
             Project: Hadoop HDFS
          Issue Type: Improvement
    Affects Versions: 0.20.2
            Reporter: Jean-Daniel Cryans
            Priority: Trivial
             Fix For: 0.22.0

DataXceiver will log an IOE twice: opReadBlock() catches it, logs a WARN, then throws it again, only for it to be caught in run() as a Throwable and logged as an ERROR. As far as I can tell, the information in both messages is the same.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
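For illustration, a minimal Java sketch of the double-logging pattern described above. The class is a hypothetical reduction, not the actual DataXceiver source: java.util.logging stands in for Hadoop's commons-logging, and the method bodies are invented.

    import java.io.IOException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical reduction of the pattern described in HDFS-1918.
    public class DoubleLogDemo implements Runnable {
      private static final Logger LOG = Logger.getLogger("DataXceiver");

      public void run() {
        try {
          opReadBlock();
        } catch (Throwable t) {
          // Second log of the very same exception, now as an ERROR.
          LOG.log(Level.SEVERE, "DataXceiver error processing read", t);
        }
      }

      private void opReadBlock() throws IOException {
        try {
          throw new IOException("simulated read failure");
        } catch (IOException ioe) {
          LOG.log(Level.WARNING, "opReadBlock failed", ioe); // first log, as a WARN
          throw ioe; // rethrown, only to be caught and logged again in run()
        }
      }

      public static void main(String[] args) {
        new DoubleLogDemo().run(); // prints the same stack trace twice
      }
    }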
[jira] [Resolved] (HDFS-1062) Improve error messages for failed completeFile
[ https://issues.apache.org/jira/browse/HDFS-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Hsieh resolved HDFS-1062.
----------------------------------
    Resolution: Duplicate

According to Todd, this is already completed by HDFS-1141.

> Improve error messages for failed completeFile
> ----------------------------------------------
>
>                 Key: HDFS-1062
>                 URL: https://issues.apache.org/jira/browse/HDFS-1062
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client, name-node
>            Reporter: Todd Lipcon
>            Assignee: Jonathan Hsieh
>              Labels: newbie
>
> In practice I often see users confused by the cryptic error message "failed
> to complete PATH because dir.getFileBlocks() is null and pendingFile is null"
> (I wonder why!) The most common cause of this seems to be that another user
> deleted the file (or its containing directory) while the writer was in
> progress.
> We should at least improve the error message on the NN side. Even better
> would be to expose the error message through the IOException passed over the
> RPC boundary to the client.
> Including a message like "(another client may have removed the file or its
> containing directory)" should do the trick.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
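A hedged sketch of the message improvement the original issue asked for. Method and parameter names are hypothetical; HDFS-1141 is the actual fix, which this does not reproduce.

    import java.io.IOException;

    // Hypothetical sketch: append the suggested hint to the completeFile error
    // so it reaches the client over the RPC boundary inside the IOException.
    public class CompleteFileCheck {
      static void checkCanComplete(String path, Object fileBlocks, Object pendingFile)
          throws IOException {
        if (fileBlocks == null || pendingFile == null) {
          throw new IOException("Failed to complete " + path
              + " because the file is no longer under construction"
              + " (another client may have removed the file"
              + " or its containing directory)");
        }
      }
    }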
[jira] [Resolved] (HDFS-1022) Merge under-10-min tests specs into one file
[ https://issues.apache.org/jira/browse/HDFS-1022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HDFS-1022.
-------------------------------
    Resolution: Fixed
    Fix Version/s: 0.20.203.0

Resolving as fixed. This jira doesn't apply post project split.

> Merge under-10-min tests specs into one file
> --------------------------------------------
>
>                 Key: HDFS-1022
>                 URL: https://issues.apache.org/jira/browse/HDFS-1022
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.20.1
>            Reporter: Erik Steffl
>            Assignee: Erik Steffl
>             Fix For: 0.20.203.0, 0.20.1
>
>         Attachments: jira.HDFS-1022.branch-0.20.1xx.patch
>
> The test-commit build target invokes macro-test-runner three times with
> three different files. This is a problem because macro-test-runner deletes
> logs before each run.
> The proposed solution is to merge all test specs (common, hdfs, mapred) into
> one file, since it doesn't seem to be possible to call macro-test-runner with
> three files as arguments (or to change macro-test-runner to make that
> possible).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-trunk-Commit - Build # 638 - Failure
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/638/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 2787 lines...]
    [javac] required: org.apache.hadoop.security.token.Token
    [javac] (Token) tokenList.get(0));
    [javac] ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 2 warnings
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache

run-commit-test:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/logs
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED (timeout)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.555 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.528 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.263 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNNLeaseRecovery
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 3.629 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:700: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:663: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:731: Tests failed!

Total time: 15 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

1 tests failed.

REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testErrorReplicas

Error Messa
[jira] [Created] (HDFS-1919) Upgrade to federated namespace fails
Upgrade to federated namespace fails
------------------------------------

                 Key: HDFS-1919
                 URL: https://issues.apache.org/jira/browse/HDFS-1919
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: name-node
    Affects Versions: 0.23.0
            Reporter: Todd Lipcon
            Priority: Blocker
             Fix For: 0.23.0

I formatted a namenode running off the 0.22 branch, and trying to start it on trunk yields:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/name1 is in an inconsistent state: file VERSION has clusterID mising.

It looks like 0.22 has LAYOUT_VERSION -33, but trunk has LAST_PRE_FEDERATION_LAYOUT_VERSION = -30, which is incorrect.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
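HDFS layout versions are negative integers that decrease as the format evolves, so a 0.22 image at -33 is newer than a constant of -30. A hedged sketch of the misclassification this report describes, using the constant values from the report; the class, method, and check are hypothetical reductions of the real upgrade logic.

    // Hypothetical sketch of the upgrade check described above. HDFS layout
    // versions are negative, and more-negative numbers mean newer formats.
    public class LayoutCheck {
      // Per the report: trunk says federation began after -30, but the 0.22
      // image was written with layout version -33.
      static final int LAST_PRE_FEDERATION_LAYOUT_VERSION = -30;

      static boolean expectsClusterId(int imageLayoutVersion) {
        // Images newer than the last pre-federation layout must carry a clusterID.
        return imageLayoutVersion < LAST_PRE_FEDERATION_LAYOUT_VERSION;
      }

      public static void main(String[] args) {
        // With the constant at -30, a 0.22 image (-33) is classified as
        // post-federation, so the namenode demands a clusterID the image lacks.
        System.out.println(expectsClusterId(-33)); // true -> InconsistentFSStateException
      }
    }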
[jira] [Resolved] (HDFS-1897) Documentation refers to removed option dfs.network.script
[ https://issues.apache.org/jira/browse/HDFS-1897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-1897.
-------------------------------
    Resolution: Fixed
    Hadoop Flags: [Reviewed]

Committed to trunk and 0.22, thanks Andrew!

> Documentation refers to removed option dfs.network.script
> ----------------------------------------------------------
>
>                 Key: HDFS-1897
>                 URL: https://issues.apache.org/jira/browse/HDFS-1897
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>    Affects Versions: 0.20.2, 0.21.0, 0.22.0
>            Reporter: Ari Rabkin
>            Assignee: Andrew Whang
>            Priority: Minor
>              Labels: newbie
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1897.patch
>
> The HDFS user guide tells users to use dfs.network.script for rack awareness.
> In fact, this option has been removed and using it will trigger a fatal error
> on DataNode startup. Documentation should describe the current rack awareness
> configuration system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
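For readers who hit the removed option, a hedged pointer: in the 0.20/0.21 era the script-based rack mapping is keyed by topology.script.file.name (used by ScriptBasedMapping). The key name is my recollection rather than something stated in this thread, and the script path below is a placeholder.

    import org.apache.hadoop.conf.Configuration;

    // Hedged example: dfs.network.script is gone; script-based rack awareness
    // is configured with topology.script.file.name instead. The script path
    // below is a placeholder, not a real file.
    public class RackAwarenessConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("topology.script.file.name", "/etc/hadoop/topology.sh");
        System.out.println(conf.get("topology.script.file.name"));
      }
    }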
[jira] [Created] (HDFS-1920) libhdfs does not build for ARM processors
libhdfs does not build for ARM processors
-----------------------------------------

                 Key: HDFS-1920
                 URL: https://issues.apache.org/jira/browse/HDFS-1920
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: contrib/libhdfs
    Affects Versions: 0.21.0
         Environment:
$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/arm-linux-gnueabi/gcc/arm-linux-gnueabi/4.5.2/lto-wrapper
Target: arm-linux-gnueabi
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.5.2-8ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.5 --enable-shared --enable-multiarch --with-multiarch-defaults=arm-linux-gnueabi --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib/arm-linux-gnueabi --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.5 --libdir=/usr/lib/arm-linux-gnueabi --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-plugin --enable-gold --enable-ld=default --with-plugin-ld=ld.gold --enable-objc-gc --disable-sjlj-exceptions --with-arch=armv7-a --with-float=softfp --with-fpu=vfpv3-d16 --with-mode=thumb --disable-werror --enable-checking=release --build=arm-linux-gnueabi --host=arm-linux-gnueabi --target=arm-linux-gnueabi
Thread model: posix
gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)

$ uname -a
Linux panda0 2.6.38-1002-linaro-omap #3-Ubuntu SMP Fri Apr 15 14:00:54 UTC 2011 armv7l armv7l armv7l GNU/Linux

            Reporter: Trevor Robinson

$ ant compile -Dcompile.native=true -Dcompile.c++=1 -Dlibhdfs=1 -Dfusedfs=1
...
create-libhdfs-configure:
...
     [exec] configure: error: Unsupported CPU architecture "armv7l"

Once the CPU arch check is fixed in src/c++/libhdfs/m4/apsupport.m4, the next issue is -m32:

$ ant compile -Dcompile.native=true -Dcompile.c++=1 -Dlibhdfs=1 -Dfusedfs=1
...
compile-c++-libhdfs:
     [exec] /bin/bash ./libtool --tag=CC --mode=compile gcc -DPACKAGE_NAME=\"libhdfs\" -DPACKAGE_TARNAME=\"libhdfs\" -DPACKAGE_VERSION=\"0.1.0\" -DPACKAGE_STRING=\"libhdfs\ 0.1.0\" -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -Dsize_t=unsigned\ int -Dconst=/\*\*/ -Dvolatile=/\*\*/ -I. -I/home/trobinson/dev/hadoop-hdfs/src/c++/libhdfs -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" -m32 -I/usr/lib/jvm/java-6-openjdk/include -I/usr/lib/jvm/java-6-openjdk/include/arm -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c -o hdfs.lo /home/trobinson/dev/hadoop-hdfs/src/c++/libhdfs/hdfs.c
     [exec] make: Warning: File `.deps/hdfs_write.Po' has modification time 2.1 s in the future
     [exec] libtool: compile: gcc -DPACKAGE_NAME=\"libhdfs\" -DPACKAGE_TARNAME=\"libhdfs\" -DPACKAGE_VERSION=\"0.1.0\" "-DPACKAGE_STRING=\"libhdfs 0.1.0\"" -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" "-Dsize_t=unsigned int" "-Dconst=/**/" "-Dvolatile=/**/" -I. -I/home/trobinson/dev/hadoop-hdfs/src/c++/libhdfs -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" -m32 -I/usr/lib/jvm/java-6-openjdk/include -I/usr/lib/jvm/java-6-openjdk/include/arm -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c /home/trobinson/dev/hadoop-hdfs/src/c++/libhdfs/hdfs.c -fPIC -DPIC -o .libs/hdfs.o
     [exec] cc1: error: unrecognized command line option "-m32"
     [exec] make: *** [hdfs.lo] Error 1

Here, gcc does not support -m32 for the ARM target, so -m${JVM_ARCH} must be omitted from CFLAGS and LDFLAGS.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-trunk-Commit - Build # 639 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/639/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 2876 lines...]
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.256 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.249 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.721 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 16.16 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 27.9 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.622 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.175 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 11.339 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.985 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.866 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.084 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.724 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.687 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.309 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.056 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.011 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 9.166 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.734 sec
    [junit] Running org.apache.hadoop.net.TestNetworkTopology
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.111 sec
    [junit] Running org.apache.hadoop.security.TestPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.715 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:706: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:663: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:731: Tests failed!

Total time: 8 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

2 tests failed.

FAILED: org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
    at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
    at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
    at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)

FAILED: org.apache.hadoop.hdfs.TestDFSShell.testErrOutPut

Error Message:
-rm returned -1

Stack Trace:
junit.fra
[jira] [Created] (HDFS-1921) Save namespace can cause NN to be unable to come up on restart
Save namespace can cause NN to be unable to come up on restart
--------------------------------------------------------------

                 Key: HDFS-1921
                 URL: https://issues.apache.org/jira/browse/HDFS-1921
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.22.0, 0.23.0
            Reporter: Aaron T. Myers
            Priority: Critical
             Fix For: 0.22.0, 0.23.0

I discovered this in the course of trying to implement a fix for HDFS-1505.

Per the comment for {{FSImage.saveNamespace(...)}}, the algorithm for save namespace proceeds in the following order:

# rename current to lastcheckpoint.tmp for all of them,
# save image and recreate edits for all of them,
# rename lastcheckpoint.tmp to previous.checkpoint.

The problem is that step 3 occurs regardless of whether or not an error occurs for all storage directories in step 2. Upon restart, the NN will see non-existent or corrupt {{current}} directories, and no {{lastcheckpoint.tmp}} directories, and so will conclude that the storage directories are not formatted.

This issue appears to be present on both 0.22 and 0.23. This should arguably be a 0.22/0.23 blocker.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
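A hedged Java reduction of the ordering problem (storage directories reduced to strings; the real code works on StorageDirectory objects, and the helper methods here are stubs): step 3 must be limited to directories that survived step 2, otherwise a restart finds neither a valid current nor a lastcheckpoint.tmp.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical reduction of the saveNamespace ordering bug described above.
    public class SaveNamespaceSketch {
      static void saveNamespace(List<String> dirs) {
        List<String> savedOk = new ArrayList<String>();
        for (String dir : dirs) {
          rename(dir, "current", "lastcheckpoint.tmp");              // step 1
        }
        for (String dir : dirs) {
          try {
            saveImageAndRecreateEdits(dir);                          // step 2
            savedOk.add(dir);
          } catch (Exception e) {
            // The reported bug: this failure had no effect on step 3.
          }
        }
        for (String dir : savedOk) {                                 // step 3, limited to
          rename(dir, "lastcheckpoint.tmp", "previous.checkpoint");  // successful dirs only
        }
      }

      static void rename(String dir, String from, String to) { /* filesystem rename */ }
      static void saveImageAndRecreateEdits(String dir) throws Exception { /* write fsimage */ }
    }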
Hadoop-Hdfs-22-branch - Build # 43 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/43/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 3297 lines...]
compile-hdfs-test:
    [delete] Deleting directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.095 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.179 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 15.278 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 35.333 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.257 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 209.148 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 417.666 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.255 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:745: Tests failed!

Total time: 58 minutes 47 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

1 tests failed.

FAILED: org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:44880is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:44880is not an underUtilized node
    at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
    at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
    at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
    at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
    at org.apache.hadoop.hdfs.server.
Hadoop-Hdfs-trunk-Commit - Build # 640 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/640/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 2771 lines...]
    [javac] required: org.apache.hadoop.security.token.Token
    [javac] (Token) tokenList.get(0));
    [javac] ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 2 warnings
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache

run-commit-test:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/logs
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
    [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED (timeout)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.531 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.513 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.26 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNNLeaseRecovery
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 3.574 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:700: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:663: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:731: Tests failed!

Total time: 15 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

1 tests failed.

REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testErrorReplicas

Error Messag
[jira] [Created] (HDFS-1922) Recurring failure in TestJMXGet.testNameNode since build 477 on May 11
Recurring failure in TestJMXGet.testNameNode since build 477 on May 11
----------------------------------------------------------------------

                 Key: HDFS-1922
                 URL: https://issues.apache.org/jira/browse/HDFS-1922
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Matt Foley

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1923) Intermittent recurring failure in TestFiDataTransferProtocol2.pipeline_Fi_29
Intermittent recurring failure in TestFiDataTransferProtocol2.pipeline_Fi_29
----------------------------------------------------------------------------

                 Key: HDFS-1923
                 URL: https://issues.apache.org/jira/browse/HDFS-1923
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Matt Foley

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1883) Recurring failures in TestBackupNode since HDFS-1052
[ https://issues.apache.org/jira/browse/HDFS-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Foley resolved HDFS-1883.
------------------------------
    Resolution: Fixed
    Fix Version/s: 0.23.0

We are now at build 488, and have had no further failures of TestBackupNode.testCheckpoint. I think it's fixed by HDFS-1891.

> Recurring failures in TestBackupNode since HDFS-1052
> -----------------------------------------------------
>
>                 Key: HDFS-1883
>                 URL: https://issues.apache.org/jira/browse/HDFS-1883
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: test
>            Reporter: Matt Foley
>             Fix For: 0.23.0
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-trunk-Commit - Build # 641 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/641/

### LAST 60 LINES OF THE CONSOLE ###

[...truncated 2880 lines...]
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.584 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.775 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.719 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 17.435 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 28.767 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.607 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.165 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 12.186 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.589 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.057 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.084 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.799 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.813 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.289 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.056 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.333 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 9.329 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.89 sec
    [junit] Running org.apache.hadoop.net.TestNetworkTopology
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.099 sec
    [junit] Running org.apache.hadoop.security.TestPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.97 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:706: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:663: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:731: Tests failed!

Total time: 8 minutes 55 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### FAILED TESTS (if any) ###

2 tests failed.

FAILED: org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
    at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
    at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
    at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)

FAILED: org.apache.hadoop.hdfs.TestDFSShell.testErrOutPut

Error Message:
-rm returned -1

Stack Trace:
junit.fr
Question about hadoop namenode -format -clusterid
I'm at the hackathon in SF, just trying to set up a single-node cluster from my trunk checkout. I'm at the point where I need to format a new namenode, and the old way of just running "hadoop namenode -format" is failing because I'm not specifying a clusterID.

So I started poking around the code to try to figure out what is expected for the clusterID, and I found that the namenode has a hidden option "-genclusterid" which causes the namenode to just print out a new clusterID and exit. I say hidden because if you run "hadoop namenode -usage" it's not one of the listed options.

What is the correct way to format a namenode now (in trunk)? The current documentation doesn't match what the code does, so it's unclear to me how this is supposed to work.

IMHO "bin/namenode -format" should automatically generate a clusterID for you instead of exiting with an Exception. This is what everybody has been trained to do. The only time you should have to specify a clusterID is when you want to add a namenode to an existing cluster.

Doug
Re: Question about hadoop namenode -format -clusterid
The correct way to format a namenode:

/bin/hdfs namenode -format -clusterid

PS: Set your environment up correctly first (common home, etc.).

Only the first time does it require the cluster id; from the second time onwards it will remember the cluster id and prompt you to format that particular cluster id.

I have filed a Jira on this: https://issues.apache.org/jira/browse/HDFS-1905

-Bharath

From: Doug Balog
To: hdfs-dev@hadoop.apache.org
Sent: Wednesday, May 11, 2011 8:03 PM
Subject: Question about hadoop namenode -format -clusterid
[jira] [Created] (HDFS-1924) Block information displayed in UI is incorrect
Block information displayed in UI is incorrect
----------------------------------------------

                 Key: HDFS-1924
                 URL: https://issues.apache.org/jira/browse/HDFS-1924
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: name-node
    Affects Versions: 0.20-append
            Reporter: ramkrishna.s.vasudevan
            Priority: Minor
             Fix For: 0.20-append

Problem statement:
Deleted blocks are not removed from the blockmap.

Solution:
Whenever delete is called, the block entry must be removed from the block map and also moved to invalidates.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
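A hedged reduction of the proposed fix; the container names and types here are hypothetical, standing in for the namenode's real block map and invalidate structures.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical reduction of the fix proposed above: on delete, drop the
    // entry from the block map *and* queue the block for invalidation.
    public class BlockMapSketch {
      private final Map<Long, String> blockMap = new HashMap<Long, String>();
      private final Set<Long> invalidates = new HashSet<Long>();

      void deleteBlock(long blockId) {
        if (blockMap.remove(blockId) != null) {
          invalidates.add(blockId); // schedule replica deletion on datanodes
        }
      }
    }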
[jira] [Created] (HDFS-1925) SafeModeInfo should use DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT instead of 0.95
SafeModeInfo should use DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT instead of 0.95
-----------------------------------------------------------------------------------

                 Key: HDFS-1925
                 URL: https://issues.apache.org/jira/browse/HDFS-1925
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.22.0
            Reporter: Konstantin Shvachko
             Fix For: 0.22.0

The {{SafeModeInfo()}} constructor has the 0.95f default threshold hard-coded. This should be replaced by the constant {{DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT}}, which is correctly set to 0.999f.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
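A hedged before/after sketch of the one-line change being requested. The surrounding class structure is simplified; in the real code the constructor is FSNamesystem.SafeModeInfo and the constant lives in DFSConfigKeys.

    // Hedged sketch of the change requested above, with the class collapsed
    // to the one field and constructor that matter here.
    public class SafeModeSketch {
      static final float DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT = 0.999f;

      final float threshold;

      SafeModeSketch() {
        // Before: this.threshold = 0.95f;   // stale hard-coded literal
        // After: use the shared default constant.
        this.threshold = DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT;
      }
    }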