sub

2011-05-18 Thread 肖之慰
---
肖之慰


Hadoop-Hdfs-trunk - Build # 670 - Still Failing

2011-05-18 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/670/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 816715 lines...]
[junit] 2011-05-18 12:41:49,978 INFO  ipc.Server (Server.java:run(698)) - 
Stopping IPC Server Responder
[junit] 2011-05-18 12:41:49,980 INFO  datanode.DataNode 
(DataNode.java:shutdown(1620)) - Waiting for threadgroup to exit, active 
threads is 0
[junit] 2011-05-18 12:41:49,981 WARN  datanode.DataNode 
(DataNode.java:offerService(1063)) - BPOfferService for block 
pool=BP-506275176-127.0.1.1-1305722508770 received 
exception:java.lang.InterruptedException
[junit] 2011-05-18 12:41:49,981 WARN  datanode.DataNode 
(DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:42021, 
storageID=DS-1468868598-127.0.1.1-42021-1305722509254, infoPort=36475, 
ipcPort=47958, storageInfo=lv=-35;cid=testClusterID;nsid=867561895;c=0) ending 
block pool service for: BP-506275176-127.0.1.1-1305722508770
[junit] 2011-05-18 12:41:49,981 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:removeBlockPool(277)) - Removed 
bpid=BP-506275176-127.0.1.1-1305722508770 from blockPoolScannerMap
[junit] 2011-05-18 12:41:49,981 INFO  datanode.DataNode 
(FSDataset.java:shutdownBlockPool(2559)) - Removing block pool 
BP-506275176-127.0.1.1-1305722508770
[junit] 2011-05-18 12:41:49,981 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-05-18 12:41:49,981 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-05-18 12:41:49,982 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(1046)) - Shutting down DataNode 0
[junit] 2011-05-18 12:41:49,982 WARN  datanode.DirectoryScanner 
(DirectoryScanner.java:shutdown(298)) - DirectoryScanner: shutdown has been 
called
[junit] 2011-05-18 12:41:49,982 INFO  datanode.BlockPoolSliceScanner 
(BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work 
left in prev period : 100.00%
[junit] 2011-05-18 12:41:50,093 INFO  ipc.Server (Server.java:stop(1636)) - 
Stopping server on 52785
[junit] 2011-05-18 12:41:50,093 INFO  ipc.Server (Server.java:run(1471)) - 
IPC Server handler 0 on 52785: exiting
[junit] 2011-05-18 12:41:50,093 INFO  ipc.Server (Server.java:run(494)) - 
Stopping IPC Server listener on 52785
[junit] 2011-05-18 12:41:50,093 INFO  datanode.DataNode 
(DataNode.java:shutdown(1620)) - Waiting for threadgroup to exit, active 
threads is 1
[junit] 2011-05-18 12:41:50,094 WARN  datanode.DataNode 
(DataXceiverServer.java:run(143)) - 127.0.0.1:48961:DataXceiveServer: 
java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] 
[junit] 2011-05-18 12:41:50,093 INFO  ipc.Server (Server.java:run(698)) - 
Stopping IPC Server Responder
[junit] 2011-05-18 12:41:50,096 INFO  datanode.DataNode 
(DataNode.java:shutdown(1620)) - Waiting for threadgroup to exit, active 
threads is 0
[junit] 2011-05-18 12:41:50,096 WARN  datanode.DataNode 
(DataNode.java:offerService(1063)) - BPOfferService for block 
pool=BP-506275176-127.0.1.1-1305722508770 received 
exception:java.lang.InterruptedException
[junit] 2011-05-18 12:41:50,096 WARN  datanode.DataNode 
(DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:48961, 
storageID=DS-231220868-127.0.1.1-48961-1305722509137, infoPort=55136, 
ipcPort=52785, storageInfo=lv=-35;cid=testClusterID;nsid=867561895;c=0) ending 
block pool service for: BP-506275176-127.0.1.1-1305722508770
[junit] 2011-05-18 12:41:50,196 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:removeBlockPool(277)) - Removed 
bpid=BP-506275176-127.0.1.1-1305722508770 from blockPoolScannerMap
[junit] 2011-05-18 12:41:50,196 INFO  datanode.DataNode 
(FSDataset.java:shutdownBlockPool(2559)) - Removing block pool 
BP-506275176-127.0.1.1-1305722508770
[junit] 2011-05-18 12:41:50,196 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-05-18 12:41:50,197 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-05-18 12:41:50,201 WARN  namenode.FSNamesystem 
(FS

[jira] [Reopened] (HDFS-1922) Recurring failure in TestJMXGet.testNameNode since build 477 on May 11

2011-05-18 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HDFS-1922:
--


HDFS-1117 was committed but TestJMXGet is still failing; see [build 
#549|https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/549//testReport/].

> Recurring failure in TestJMXGet.testNameNode since build 477 on May 11
> --
>
> Key: HDFS-1922
> URL: https://issues.apache.org/jira/browse/HDFS-1922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Matt Foley
> Fix For: 0.23.0
>
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Review Request: Misc improvements to HDFS HTML

2011-05-18 Thread Todd Lipcon

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/754/
---

Review request for hadoop-hdfs, Todd Lipcon and Eugene Koontz.


Summary
---

Uploading Eugene's patch 
https://issues.apache.org/jira/secure/attachment/12479631/HDFS-1013.patch for 
easier review


This addresses bug HDFS-1013.
https://issues.apache.org/jira/browse/HDFS-1013


Diffs
-

  trunk/src/java/org/apache/hadoop/hdfs/server/common/JspHelper.java 1124363 
  trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java 
1124363 
  trunk/src/webapps/hdfs/dfshealth.jsp 1124363 
  trunk/src/webapps/static/hadoop.css PRE-CREATION 

Diff: https://reviews.apache.org/r/754/diff


Testing
---


Thanks,

Todd



Re: Review Request: Misc improvements to HDFS HTML

2011-05-18 Thread Todd Lipcon

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/754/#review682
---


A few preliminary comments. I also want to load this up and see how it looks :)


trunk/src/java/org/apache/hadoop/hdfs/server/common/JspHelper.java


these lines are missing  and  still



trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


missing  on this line and lines below



trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


do we need warning class on both the div and the a?



trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


can we kill off these table attributes and use CSS styles to style all the 
tables consistently?


- Todd


On 2011-05-18 19:17:02, Todd Lipcon wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/754/
> ---
> 
> (Updated 2011-05-18 19:17:02)
> 
> 
> Review request for hadoop-hdfs, Todd Lipcon and Eugene Koontz.
> 
> 
> Summary
> ---
> 
> Uploading Eugene's patch 
> https://issues.apache.org/jira/secure/attachment/12479631/HDFS-1013.patch for 
> easier review
> 
> 
> This addresses bug HDFS-1013.
> https://issues.apache.org/jira/browse/HDFS-1013
> 
> 
> Diffs
> -
> 
>   trunk/src/java/org/apache/hadoop/hdfs/server/common/JspHelper.java 1124363 
>   
> trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java 
> 1124363 
>   trunk/src/webapps/hdfs/dfshealth.jsp 1124363 
>   trunk/src/webapps/static/hadoop.css PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/754/diff
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Todd
> 
>



[jira] [Created] (HDFS-1955) CLONE - saveNamespace appears to succeed even if all directories fail to save

2011-05-18 Thread Matt Foley (JIRA)
CLONE - saveNamespace appears to succeed even if all directories fail to save
-

 Key: HDFS-1955
 URL: https://issues.apache.org/jira/browse/HDFS-1955
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Matt Foley
Assignee: Aaron T. Myers
Priority: Blocker
 Fix For: 0.22.0


After HDFS-1071, saveNamespace now appears to "succeed" even if all of the 
individual directories failed to save.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1956) HDFS federation configuration should be documented

2011-05-18 Thread Ari Rabkin (JIRA)
HDFS federation configuration should be documented
--

 Key: HDFS-1956
 URL: https://issues.apache.org/jira/browse/HDFS-1956
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Ari Rabkin
Assignee: Jitendra Nath Pandey


HDFS-1689 didn't document any of the new configuration options it introduced. 
These should be in a "Federation user guide", or at the very least in Javadoc.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1957) Documentation for HFTP

2011-05-18 Thread Ari Rabkin (JIRA)
Documentation for HFTP
--

 Key: HDFS-1957
 URL: https://issues.apache.org/jira/browse/HDFS-1957
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Ari Rabkin
Assignee: Ari Rabkin
Priority: Minor
 Fix For: 0.23.0


There should be some documentation for HFTP.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1958) Format confirmation prompt should be more lenient of its input

2011-05-18 Thread Todd Lipcon (JIRA)
Format confirmation prompt should be more lenient of its input
--

 Key: HDFS-1958
 URL: https://issues.apache.org/jira/browse/HDFS-1958
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.22.0


As reported on the mailing list, the namenode format prompt only accepts 'Y'. 
We should also accept 'y' and 'yes' (case-insensitively).
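
The lenient matching proposed above could be sketched as follows; the class 
and method names are illustrative only, not taken from the NameNode source:

```java
import java.util.Locale;

/** Sketch of a more lenient format-confirmation check. The class and
 *  method names here are hypothetical, not from the actual NameNode code. */
public class FormatPrompt {
    /** Accepts "Y", "y", "yes", "YES", etc.; anything else means "no". */
    public static boolean confirmsFormat(String input) {
        if (input == null) {
            return false;
        }
        // Normalize case and surrounding whitespace before comparing.
        String normalized = input.trim().toLowerCase(Locale.ROOT);
        return normalized.equals("y") || normalized.equals("yes");
    }
}
```

Any input other than a case-insensitive "y" or "yes" still defaults to 
refusing the format, which keeps the prompt safe by default.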

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 668 - Failure

2011-05-18 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/668/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2521 lines...]
[javac] ^
[javac] 
/tmp/clover5444020297384197790.tmp/org/apache/hadoop/fs/TestResolveHdfsSymlink.java:120:
 warning: [unchecked] unchecked cast
[javac] found   : org.apache.hadoop.security.token.Token
[javac] required: org.apache.hadoop.security.token.Token
[javac] (Token) 
tokenList.get(0));
[javac] 
  ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 2 warnings
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/cache

run-commit-test:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/logs
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/extraconf
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery 
FAILED (timeout)
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.557 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.452 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
[junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.264 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestNNLeaseRecovery
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 3.781 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:703:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:666:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:734:
 Tests failed!

Total time: 15 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testErrorReplicas

Error Message:
Timeout occurred. Please note the time in the report does not reflect the 

[jira] [Created] (HDFS-1959) Better error message for missing namenode directory

2011-05-18 Thread Eli Collins (JIRA)
Better error message for missing namenode directory
---

 Key: HDFS-1959
 URL: https://issues.apache.org/jira/browse/HDFS-1959
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.0
 Attachments: hdfs-1959-1.patch

Starting the namenode with a missing NN directory currently results in two 
stack traces, "Expecting a line not the end of stream" from DF and an NPE. 
Let's make this more user-friendly.
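
One way the friendlier failure mode could look, as a standalone sketch: 
validate the directory up front and report the problem plainly, instead of 
surfacing a DF parse error and an NPE. The class, method, and messages below 
are hypothetical, not the actual NameNode code:

```java
import java.io.File;

/** Hypothetical sketch of an up-front storage-directory check that
 *  produces one clear message instead of two stack traces. */
public class NameDirValidator {
    /** Returns a human-readable error message, or null if the directory is usable. */
    public static String validate(File dir) {
        if (!dir.exists()) {
            return "NameNode storage directory does not exist: " + dir;
        }
        if (!dir.isDirectory()) {
            return "NameNode storage path is not a directory: " + dir;
        }
        if (!dir.canWrite()) {
            return "NameNode storage directory is not writable: " + dir;
        }
        return null;
    }
}
```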

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1960) dfs.*.dir should not default to /tmp (or other typically volatile storage)

2011-05-18 Thread philo vivero (JIRA)
dfs.*.dir should not default to /tmp (or other typically volatile storage)
--

 Key: HDFS-1960
 URL: https://issues.apache.org/jira/browse/HDFS-1960
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.20.2
 Environment: *nix systems
Reporter: philo vivero
Priority: Critical


The hdfs-site.xml file may fail to define one or both of:

dfs.name.dir
dfs.data.dir

If they are not specified, data is stored in /tmp. This is extremely dangerous: 
the cluster will work fine for days, possibly even weeks, before blocks start 
to go missing. Rebooting a datanode on common Linux systems will clear all the 
data from that node. There is no documented way (that I'm aware of) to recover 
from this situation; the cluster must be completely obliterated and rebuilt 
from scratch.

Better reactions to the missing configuration parameters:

1. The DataNode dies on startup and asks that these parameters be defined.
2. The default becomes /var/db/hadoop (or some other non-volatile storage 
location); inability to write into that directory likewise causes the DataNode 
to die on startup, logging an error.

The first solution would most likely be preferred by typical enterprise 
sysadmins. The second is suboptimal (since /var/db/hadoop might not be the 
optimal location for the data) but is still preferable to the current 
implementation, since it will less often lead to an irretrievably corrupt 
cluster.
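
Option 1 above (fail fast when the directories are unset) could be sketched 
roughly as follows. The class name is made up, and java.util.Properties stands 
in for the real Hadoop Configuration object, which reads these keys from 
hdfs-site.xml:

```java
import java.util.Properties;

/** Hypothetical sketch of a fail-fast startup check: refuse to start when
 *  a storage directory key is unset, instead of silently using /tmp. */
public class StorageDirCheck {
    /** Returns the configured value, or throws if the key is missing/blank. */
    public static String requireDir(Properties conf, String key) {
        String value = conf.getProperty(key);
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalStateException(
                "Refusing to start: " + key + " is not configured. "
                + "Set it to a non-volatile path in hdfs-site.xml.");
        }
        return value;
    }
}
```

The check runs once per required key at startup, so a misconfigured node dies 
immediately with an actionable message rather than weeks later with data loss.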

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1961) New architectural documentation created

2011-05-18 Thread Rick Kazman (JIRA)
New architectural documentation created
---

 Key: HDFS-1961
 URL: https://issues.apache.org/jira/browse/HDFS-1961
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.21.0
Reporter: Rick Kazman
 Fix For: 0.21.0


This material provides an overview of the HDFS architecture and is intended for 
contributors. The goal of this document is to provide a guide to the overall 
structure of the HDFS code so that contributors can more effectively understand 
how changes that they are considering can be made, and the consequences of 
those changes. The assumption is that the reader has a basic understanding of 
HDFS, its purpose, and how it fits into the Hadoop project suite. 

An HTML version of the architectural documentation can be found at:  
http://kazman.shidler.hawaii.edu/ArchDoc.html

All comments and suggestions for improvements are appreciated.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira