[jira] [Created] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2014-12-30 Thread Lars Francke (JIRA)
Lars Francke created HDFS-7575:
--

 Summary: NameNode not handling heartbeats properly after HDFS-2832
 Key: HDFS-7575
 URL: https://issues.apache.org/jira/browse/HDFS-7575
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Lars Francke


Before HDFS-2832, each DataNode had a single unique storageId which included its 
IP address. Since HDFS-2832, each DataNode has a unique storageId per storage 
directory, which is just a random UUID.

The DataNodes send one report per storage directory in their heartbeats. A 
heartbeat is processed on the NameNode in the {{DatanodeDescriptor#updateHeartbeatState}} 
method. Pre HDFS-2832 this simply stored the information per DataNode. After 
the patch, though, each DataNode can have multiple storages, so the information 
is stored in a map keyed by the storage Id.
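
For illustration, this is roughly how I read that code path (a simplified sketch, not the actual Hadoop code; the single-argument {{DatanodeStorageInfo}} constructor and the method shape are simplifications):

{code:title=Simplified sketch of the heartbeat handling (not the real code)}
// One DatanodeStorageInfo per storageId: reports that share an id all map
// to the same entry and overwrite each other.
private final Map<String, DatanodeStorageInfo> storageMap =
    new HashMap<String, DatanodeStorageInfo>();

void updateHeartbeatState(StorageReport[] reports) {
  for (StorageReport r : reports) {
    String id = r.getStorage().getStorageID();
    DatanodeStorageInfo storage = storageMap.get(id);
    if (storage == null) {
      storage = new DatanodeStorageInfo(r.getStorage());
      storageMap.put(id, storage);
    }
    storage.updateState(r);  // plain assignment, see the excerpt below
  }
}
{code}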

This works fine for all clusters that were installed post HDFS-2832, as they 
get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
different keys. On each heartbeat the map is searched and updated 
({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):

{code:title=DatanodeStorageInfo}
  void updateState(StorageReport r) {
    capacity = r.getCapacity();
    dfsUsed = r.getDfsUsed();
    remaining = r.getRemaining();
    blockPoolUsed = r.getBlockPoolUsed();
  }
{code}

On clusters that were upgraded from a pre-HDFS-2832 version, though, the storage 
Id has not been rewritten (at least not on the four clusters I checked), so every 
directory has the exact same storageId. That means there is only a single entry 
in the {{storageMap}}, and it gets overwritten by whichever {{StorageReport}} 
from the DataNode happens to be processed last. This can be seen in the 
{{updateState}} method above: it just assigns the capacity from the received 
report, whereas it should probably sum the values up across all reports of a 
heartbeat. So a DN with 8 drives is effectively tracked with the capacity of 
just one of them.

The Balancer seems to be one of the few things that actually uses this 
information, so it now effectively considers the utilization of a single, 
arbitrary drive per DataNode for balancing purposes.

Things get even worse when a drive is added or replaced, as it gets a new 
storage Id, so there are then two entries in the storageMap. Since new drives 
are usually empty, this skews the Balancer's decision so that such a node is 
unlikely ever to be considered over-utilized.
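To put made-up numbers on it: with seven roughly 90% full 1 TB drives sharing 
the legacy storageId and one new, empty 1 TB drive with its own UUID, the 
storageMap ends up with two entries, so the NameNode sees about 0.9 TB used out 
of 2 TB (~45%) even though the node is really about 6.3 TB of 8 TB (~79%) full.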

Another problem is that old storage entries are never removed from the 
{{storageMap}}. So if I replace a drive and it gets a new storage Id, the old 
entry stays in place and is used in all of the Balancer's calculations until 
the NameNode is restarted.

I can try providing a patch that does the following:

* Instead of using a map, just store the array we receive, or, instead of 
storing an array, sum up the values of all reports that share the same Id (see 
the sketch below)
* Clear the map on each heartbeat (so we know we have up-to-date information)
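
To make that a bit more concrete, here is a rough sketch combining both points 
(names like {{addState}} are illustrative only; this is not a patch):

{code:title=Rough sketch of the proposal (illustrative only)}
void updateHeartbeatState(StorageReport[] reports) {
  synchronized (storageMap) {
    storageMap.clear();                        // drop stale storages
    for (StorageReport r : reports) {
      String id = r.getStorage().getStorageID();
      DatanodeStorageInfo storage = storageMap.get(id);
      if (storage == null) {
        storage = new DatanodeStorageInfo(r.getStorage());
        storageMap.put(id, storage);
      }
      storage.addState(r);                     // accumulate instead of overwrite
    }
  }
}

// hypothetical counterpart to updateState() that sums values up
void addState(StorageReport r) {
  capacity      += r.getCapacity();
  dfsUsed       += r.getDfsUsed();
  remaining     += r.getRemaining();
  blockPoolUsed += r.getBlockPoolUsed();
}
{code}

Clearing the map first would also take care of the stale entries mentioned above.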

Does that sound sensible?





Hadoop-Hdfs-trunk - Build # 1989 - Still Failing

2014-12-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1989/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7193 lines...]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:53 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.186 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:53 h
[INFO] Finished at: 2014-12-30T14:27:46+00:00
[INFO] Final Memory: 53M/787M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11039
Updating YARN-2938
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
expected:<0> but was:<-3>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<-3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:806)


FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was 
expected to be on iteration 403 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager 
was not what it was expected to be on iteration 403 expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testF

Build failed in Jenkins: Hadoop-Hdfs-trunk #1989

2014-12-30 Thread Apache Jenkins Server
See 

Changes:

[zjshen] YARN-2938. Fixed new findbugs warnings in hadoop-yarn-resourcemanager 
and hadoop-yarn-applicationhistoryservice. Contributed by Varun Saxena.

[cmccabe] HADOOP-11039. ByteBufferReadable API doc is inconsistent with the 
implementations. (Yi Liu via Colin P. McCabe)

--
[...truncated 7000 lines...]
Running org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.861 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.388 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.374 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.34 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.balancer.TestBalancer
Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 247.375 sec 
<<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
testUnknownDatanode(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  Time 
elapsed: 43.739 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:806)

Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.196 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.206 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Running org.apache.hadoop.hdfs.server.mover.TestStorageMover
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 182.955 sec - 
in org.apache.hadoop.hdfs.server.mover.TestStorageMover
Running org.apache.hadoop.hdfs.server.mover.TestMover
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.786 sec - 
in org.apache.hadoop.hdfs.server.mover.TestMover
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.546 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.901 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.87 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.392 sec - in 
org.apache.hadoop.hdfs.TestPipelines
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.392 sec - in 
org.apache.hadoop.hdfs.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.974 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.795 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.052 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.215 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.085 sec - in 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #54

2014-12-30 Thread Apache Jenkins Server
See 

Changes:

[zjshen] YARN-2938. Fixed new findbugs warnings in hadoop-yarn-resourcemanager 
and hadoop-yarn-applicationhistoryservice. Contributed by Varun Saxena.

[cmccabe] HADOOP-11039. ByteBufferReadable API doc is inconsistent with the 
implementations. (Yi Liu via Colin P. McCabe)

--
[...truncated 8158 lines...]
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:744)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"nioEventLoopGroup-19-7"  prio=10 tid=1150 runnable
java.lang.Thread.State: RUNNABLE
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 524.31 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDecommission
testIncludeByRegistrationName(org.apache.hadoop.hdfs.TestDecommission)  Time 
elapsed: 360.298 sec  <<< ERROR!
java.lang.Exception: test timed out after 36 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)

at sun.ni
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.667 sec - 
in org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.144 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.399 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.645 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.26 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.207 sec - i

Hadoop-Hdfs-trunk-Java8 - Build # 54 - Still Failing

2014-12-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/54/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8351 lines...]
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in 
http://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  03:11 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  1.624 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:11 h
[INFO] Finished at: 2014-12-30T14:45:51+00:00
[INFO] Final Memory: 51M/229M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11039
Updating YARN-2938
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName

Error Message:
test timed out after 36 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 36 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Error Message:
test timed out after 3 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 3 milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer

[jira] [Created] (HDFS-7576) TestPipelinesFailover#testFailoverRightBeforeCommitSynchronization sometimes fails in Java 8 build

2014-12-30 Thread Ted Yu (JIRA)
Ted Yu created HDFS-7576:


 Summary: 
TestPipelinesFailover#testFailoverRightBeforeCommitSynchronization sometimes 
fails in Java 8 build
 Key: HDFS-7576
 URL: https://issues.apache.org/jira/browse/HDFS-7576
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/54/ :
{code}
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Error Message:
test timed out after 3 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 3 milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at 
org.apache.hadoop.test.GenericTestUtils$DelayAnswer.waitForCall(GenericTestUtils.java:226)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization(TestPipelinesFailover.java:386)
{code}





[jira] [Created] (HDFS-7577) Add additional headers that are needed by Windows

2014-12-30 Thread Thanh Do (JIRA)
Thanh Do created HDFS-7577:
--

 Summary: Add additional headers that are needed by Windows
 Key: HDFS-7577
 URL: https://issues.apache.org/jira/browse/HDFS-7577
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Thanh Do
Assignee: Thanh Do


This jira involves adding a list of (mostly dummy) headers that are available 
on POSIX systems, but not on Windows. This is one step towards making libhdfs3 
buildable on Windows.





[jira] [Created] (HDFS-7578) NFS WRITE and COMMIT responses should always use the channel pipeline

2014-12-30 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7578:


 Summary: NFS WRITE and COMMIT responses should always use the 
channel pipeline
 Key: HDFS-7578
 URL: https://issues.apache.org/jira/browse/HDFS-7578
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7578.001.patch

Write and Commit responses currently write data directly to the channel instead 
of pushing it through the channel pipeline. This can cause the NFS handler 
thread to be blocked waiting for the response to be flushed to the network 
before it can return to serve a different request.
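
For illustration only (Netty 3 style calls, assuming the usual org.jboss.netty 
imports; this is not the actual gateway code or the attached patch), the 
blocking concern looks roughly like this:

{code}
// If the handler writes the response and then waits for the flush, it cannot
// serve the next request until the bytes have hit the network:
ChannelFuture f = ctx.getChannel().write(response);
f.awaitUninterruptibly();   // blocks the NFS handler thread

// Queuing the response through the channel pipeline instead returns
// immediately and lets the I/O thread take care of flushing it:
Channels.write(ctx, Channels.future(ctx.getChannel()), response);
{code}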


