Re: hdfsproxy tests
I had a quick look before. It's trying to download a file that doesn't exist. I think I fixed that, but it was masking another problem, so I gave up at that stage.

-Ivan

On 4 Apr 2011, at 19:14, Todd Lipcon wrote:
> Hi all,
>
> The hdfsproxy TestAuthorizationFilter test has been failing now for quite a
> long time. Can anyone familiar with this code step up and fix it? There's a
> JIRA at HDFS-1666.
>
> If no one volunteers I will take a quick look, but if I can't figure it out,
> I would like to propose that we disable the hdfsproxy tests and consider it
> abandoned.
>
> -Todd
> --
> Todd Lipcon
> Software Engineer, Cloudera
Hadoop-Hdfs-trunk - Build # 628 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/628/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 739272 lines...]
    [junit] ... 11 more
    [junit] 2011-04-05 12:22:42,478 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,478 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-05 12:22:42,479 INFO datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:60473, storageID=DS-2030027606-127.0.1.1-60473-1302006152029, infoPort=37469, ipcPort=48812):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-05 12:22:42,479 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 48812
    [junit] 2011-04-05 12:22:42,479 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,479 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-05 12:22:42,480 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-05 12:22:42,480 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-05 12:22:42,480 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-05 12:22:42,582 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 46639
    [junit] 2011-04-05 12:22:42,582 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 46639: exiting
    [junit] 2011-04-05 12:22:42,583 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 46639
    [junit] 2011-04-05 12:22:42,583 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-05 12:22:42,583 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:48393, storageID=DS-1552435921-127.0.1.1-48393-1302006151861, infoPort=3, ipcPort=46639):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit]
    [junit] 2011-04-05 12:22:42,584 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,684 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-05 12:22:42,685 INFO datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:48393, storageID=DS-1552435921-127.0.1.1-48393-1302006151861, infoPort=3, ipcPort=46639):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-05 12:22:42,685 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 46639
    [junit] 2011-04-05 12:22:42,685 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,685 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-05 12:22:42,685 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-05 12:22:42,686 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-05 12:22:42,788 WARN namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-05 12:22:42,788 WARN namenode.Decom
[jira] [Created] (HDFS-1805) Some tests in TestDFSShell do not shut down the MiniDFSCluster on exception/assertion failure, which causes other test cases to fail
Some tests in TestDFSShell do not shut down the MiniDFSCluster on exception/assertion failure, which causes other test cases to fail
------------------------------------------------------------------------------------------------------------------------------------

                 Key: HDFS-1805
                 URL: https://issues.apache.org/jira/browse/HDFS-1805
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Uma Maheswara Rao G
            Assignee: Uma Maheswara Rao G
            Priority: Minor

Some test cases in TestDFSShell do not shut down the MiniDFSCluster in a finally block, so any assertion failure or exception leaves the cluster running. Other test cases then fail, which makes it difficult to find the actual failure. The cluster should be shut down in a finally block.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
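The fix described above is the standard shutdown-in-finally pattern. A minimal, self-contained sketch follows; the `MiniCluster` stub stands in for the real `MiniDFSCluster` so the control flow can be shown without a Hadoop runtime, and the method names are illustrative, not the TestDFSShell API:

```java
// Sketch: guarantee cluster shutdown even when the test body throws,
// so a failed assertion cannot leak cluster state into later tests.
public class ShutdownInFinally {

    // Stand-in stub for MiniDFSCluster (illustrative only).
    static class MiniCluster {
        boolean running = true;
        void shutdown() { running = false; }
    }

    // Runs a test body; shutdown happens in finally, and the original
    // failure still propagates to the test framework.
    static void runWithCluster(MiniCluster cluster, Runnable body) {
        try {
            body.run();
        } finally {
            cluster.shutdown();   // executed on success AND on failure
        }
    }

    public static void main(String[] args) {
        MiniCluster cluster = new MiniCluster();
        try {
            runWithCluster(cluster, () -> { throw new AssertionError("simulated failure"); });
        } catch (AssertionError reportedByFramework) {
            // the framework would record this failure; cleanup already ran
        }
        System.out.println("cluster running after failed test: " + cluster.running);
    }
}
```

Because the shutdown sits in `finally` rather than at the end of the test body, the cluster is stopped whether the assertions pass or not.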
[jira] [Created] (HDFS-1806) TestBlockReport.blockReport_08() and _09() are timing-dependent and likely to fail on fast servers
TestBlockReport.blockReport_08() and _09() are timing-dependent and likely to fail on fast servers
--------------------------------------------------------------------------------------------------

                 Key: HDFS-1806
                 URL: https://issues.apache.org/jira/browse/HDFS-1806
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node, name-node
    Affects Versions: 0.22.0
            Reporter: Matt Foley

Method waitForTempReplica() polls every 100ms during block replication, attempting to "catch" a datanode in the state of having a TEMPORARY replica. But examination of a current Hudson test failure log shows that the replica goes from "start" to "TEMPORARY" to "FINALIZED" in only 50ms, so of course the poll usually misses it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
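One generic way to avoid missing a 50ms window with a 100ms poll is to have the code under test signal the transition instead of being polled for it. A minimal sketch using `CountDownLatch`; the class and method names are illustrative, not the actual TestBlockReport/datanode API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: the writer signals the observer at the moment the transient
// state is entered, so the observer cannot miss it, however brief it is.
public class LatchInsteadOfPoll {
    private final CountDownLatch sawTemporary = new CountDownLatch(1);

    // Called by the code under test when the replica becomes TEMPORARY.
    void onTemporaryReplica() {
        sawTemporary.countDown();
    }

    // Called by the test; blocks until the transition is signalled,
    // or the timeout elapses (returns false on timeout).
    boolean waitForTemporary(long timeoutMs) throws InterruptedException {
        return sawTemporary.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        LatchInsteadOfPoll obs = new LatchInsteadOfPoll();
        Thread writer = new Thread(() -> {
            // replica is TEMPORARY only for a few milliseconds
            obs.onTemporaryReplica();
        });
        writer.start();
        System.out.println("caught TEMPORARY: " + obs.waitForTemporary(1000));
        writer.join();
    }
}
```

The latch is signalled exactly once at the transition, so the test observes the state change regardless of how quickly the replica is finalized afterwards.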
[jira] [Created] (HDFS-1807) TestCLI xml config should be able to recognize real user names which might include digits in the name
TestCLI xml config should be able to recognize real user names which might include digits in the name
-----------------------------------------------------------------------------------------------------

                 Key: HDFS-1807
                 URL: https://issues.apache.org/jira/browse/HDFS-1807
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Konstantin Boudnik

Running TestCLI in a real cluster environment I came across a problem where a regexp like this
{{^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*.*}}
doesn't match
{{-rw-r--r-- 1 testuser1 supergroup 0 2011-04-05 13:21 /tmp/testcli/file1}}
It turns out that {{[a-z]*}} doesn't match username {{testuser1}}. It'd be nice to have a regexp which works (esp. in light of HDFS-1762).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
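The mismatch is easy to reproduce with `java.util.regex`. Widening the user-name class from `[a-z]*` to `[a-z0-9]*` makes the pattern accept `testuser1`; the replacement class here is a suggestion to illustrate the problem, not the committed fix:

```java
import java.util.regex.Pattern;

// Demonstrates why [a-z]* fails on a user name containing a digit,
// and that a widened character class matches the same ls-style line.
public class UserNameRegex {
    public static void main(String[] args) {
        String line = "-rw-r--r--   1 testuser1 supergroup          0 "
                    + "2011-04-05 13:21 /tmp/testcli/file1";

        // Original pattern: [a-z]* stops before the digit in "testuser1",
        // so the literal "supergroup" can never line up.
        Pattern old = Pattern.compile(
                "^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*.*");

        // Widened class also accepts digits in the user name.
        Pattern fixed = Pattern.compile(
                "^-rw-r--r--( )*1( )*[a-z0-9]*( )*supergroup( )*.*");

        System.out.println("old matches:   " + old.matcher(line).matches());   // false
        System.out.println("fixed matches: " + fixed.matcher(line).matches()); // true
    }
}
```

A still more permissive class such as `\S+` would cover user names with mixed case or underscores as well.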
[jira] [Created] (HDFS-1808) TestBalancer waits forever, errs without giving information
TestBalancer waits forever, errs without giving information
-----------------------------------------------------------

                 Key: HDFS-1808
                 URL: https://issues.apache.org/jira/browse/HDFS-1808
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node, name-node
    Affects Versions: 0.22.0
            Reporter: Matt Foley
            Assignee: Matt Foley

In three locations in the code, TestBalancer waits forever on a condition. Failures result in a Hudson/Jenkins "Timeout occurred" error message with no information about where or why. These waits should be replaced with TimeoutExceptions that carry a stack trace and useful information about the failure mode.

In waitForHeartBeat(), the test waits on an exact match for a value that may be coarsely quantized -- i.e., significant deviation from the exact "expected" result may occur. This should be replaced with an allowed range of results.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
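Both proposed changes can be sketched in one bounded-wait helper: a hard deadline instead of an unbounded loop, a tolerance band instead of an exact match, and a `TimeoutException` that reports the last observed value. The method and parameter names are illustrative, not the TestBalancer API:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.LongSupplier;

// Sketch: poll with a deadline, accept a tolerance band, and fail with
// a diagnostic TimeoutException instead of hanging forever.
public class BoundedWait {

    static void waitForValue(LongSupplier actual, long expected, long tolerance,
                             long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        long last = actual.getAsLong();
        while (Math.abs(last - expected) > tolerance) {
            if (System.currentTimeMillis() > deadline) {
                // The exception carries where/why detail that a bare
                // Jenkins "Timeout occurred" message lacks.
                throw new TimeoutException("expected " + expected + " +/- "
                        + tolerance + " but last observed value was " + last);
            }
            Thread.sleep(100);
            last = actual.getAsLong();
        }
    }

    public static void main(String[] args) throws Exception {
        // A coarsely quantized value: 990 is accepted as "close enough" to 1000.
        waitForValue(() -> 990L, 1000L, 50L, 1000L);
        System.out.println("within tolerance");
    }
}
```

The tolerance parameter addresses the waitForHeartBeat() quantization issue, while the deadline plus exception message addresses the silent infinite wait.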
[jira] [Created] (HDFS-1809) Lack of proper DefaultMetricsSystem initialization breaks some tests
Lack of proper DefaultMetricsSystem initialization breaks some tests
--------------------------------------------------------------------

                 Key: HDFS-1809
                 URL: https://issues.apache.org/jira/browse/HDFS-1809
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Owen O'Malley
            Assignee: Suresh Srinivas
             Fix For: 0.20.203.0

The following tests are failing:
- TestHDFSServerPorts
- TestNNLeaseRecovery
- TestSaveNamespace

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1810) Remove duplicate jar entries from common
Remove duplicate jar entries from common
----------------------------------------

                 Key: HDFS-1810
                 URL: https://issues.apache.org/jira/browse/HDFS-1810
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Owen O'Malley
            Assignee: Luke Lu

Remove the jars that we get from common from our direct dependency list.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1811) Create scripts to decommission datanodes
Create scripts to decommission datanodes
----------------------------------------

                 Key: HDFS-1811
                 URL: https://issues.apache.org/jira/browse/HDFS-1811
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Owen O'Malley
            Assignee: Erik Steffl

Create scripts to decommission datanodes:
- distribute exclude file
  - input is location of exclude file
  - location on namenodes: hdfs getconf -excludeFile
  - list of namenodes: hdfs getconf -namenodes
  - scp excludes files to all namenodes
- refresh namenodes
  - list of namenodes: hdfs getconf -namenodes
  - refresh namenodes: hdfs dfsadmin -refreshNodes

Two scripts are needed because each of them might require different permissions.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
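The two scripts outlined above can be sketched as follows, folded into one file for illustration (the proposal keeps them separate because they may need different permissions). The `hdfs getconf` and `hdfs dfsadmin -refreshNodes` sub-commands are taken from the issue text; the stub functions, host names, and the `-fs` targeting of individual namenodes are assumptions added so the control flow can be exercised without a live cluster:

```shell
#!/bin/sh
# Sketch of the proposed decommission scripts.
# Stub functions stand in for a real cluster; remove them to run for real.
hdfs() {
    case "$*" in
        "getconf -namenodes")   echo "nn1.example.com nn2.example.com" ;;
        "getconf -excludeFile") echo "/etc/hadoop/conf/dfs.exclude" ;;
        *)                      echo "hdfs $*" ;;  # e.g. dfsadmin -refreshNodes
    esac
}
scp() { echo "scp $*"; }

EXCLUDES="${1:-/tmp/dfs.exclude}"       # input: local exclude file to distribute

# Script 1: distribute the exclude file to every namenode.
DEST=$(hdfs getconf -excludeFile)       # path expected on each namenode
for nn in $(hdfs getconf -namenodes); do
    scp "$EXCLUDES" "$nn:$DEST"
done

# Script 2: refresh every namenode so the new exclude list takes effect.
for nn in $(hdfs getconf -namenodes); do
    hdfs dfsadmin -fs "hdfs://$nn" -refreshNodes
done
```

With the stubs in place the script only echoes the commands it would run, which doubles as a dry-run mode for reviewing the plan before touching a cluster.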