Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #65

2015-01-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/65/

Changes:

[zjshen] YARN-2996. Improved synchronization and I/O operations of FS- and Mem- RMStateStore. Contributed by Yi Liu.

[benoy] HADOOP-10651. Add ability to restrict service access using IP addresses and hostnames. (Benoy Antony)

[cnauroth] HDFS-7589. Break the dependency between libnative_mini_dfs and libhdfs. Contributed by Zhanwei Wang.

[jianhe] YARN-2997. Fixed NodeStatusUpdater to not send already-sent completed container statuses on heartbeat. Contributed by Chengbing Liu

[cnauroth] HDFS-7579. Improve log reporting during block report rpc failure. Contributed by Charles Lamb.

[cmccabe] HADOOP-11470. Remove some uses of obsolete guava APIs from the hadoop codebase (Sangjin Lee via Colin P. McCabe)

--
[...truncated 7186 lines...]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 526.945 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommission
testIncludeByRegistrationName(org.apache.hadoop.hdfs.TestDecommission)  Time elapsed: 360.328 sec  <<< ERROR!
java.lang.Exception: test timed out after 360000 milliseconds
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)

at java.util.concurrent.ThreadPoolExecutor.runWorker(Thr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.622 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.231 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.02 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.372 sec - in org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.889 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.254 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
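A note on the repeated HotSpot warning: it is benign. Java 8 removed the
permanent generation, so -XX:MaxPermSize is silently ignored. Assuming the
flag reaches the forked test JVMs through MAVEN_OPTS (which this log does not
confirm), a Metaspace-era replacement would look like:

    # hypothetical replacement for the removed PermGen flag; heap size illustrative
    export MAVEN_OPTS="-Xmx2g -XX:MaxMetaspaceSize=768m"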

Hadoop-Hdfs-trunk-Java8 - Build # 65 - Failure

2015-01-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/65/

###
## LAST 60 LINES OF THE CONSOLE ##
###
[...truncated 7379 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:53 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  1.624 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-01-09T14:27:49+00:00
[INFO] Final Memory: 47M/232M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7579
Updating HDFS-7589
Updating HADOOP-11470
Updating HADOOP-10651
Updating YARN-2996
Updating YARN-2997
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) ##
###
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName

Error Message:
test timed out after 360000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 360000 milliseconds
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)
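
For context: 360000 ms matches a JUnit-level timeout rather than the sleep
itself; surefire reports the limit declared on the test annotation. A minimal
sketch of the mechanism, with a hypothetical test class (this is not the
TestDecommission code):

    import org.junit.Test;

    public class TimeoutSketch {
      // JUnit interrupts the test thread once the declared timeout (in
      // milliseconds) elapses; the interrupted sleep then surfaces as
      // "java.lang.Exception: test timed out after 360000 milliseconds".
      @Test(timeout = 360000)
      public void slowOperation() throws Exception {
        Thread.sleep(500); // stand-in for the real test work
      }
    }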




[jira] [Resolved] (HDFS-7574) Make cmake work in Windows Visual Studio 2010

2015-01-09 Thread Colin Patrick McCabe (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe resolved HDFS-7574.

      Resolution: Fixed
   Fix Version/s: HDFS-6994
Target Version/s: HDFS-6994

> Make cmake work in Windows Visual Studio 2010
> ---------------------------------------------
>
> Key: HDFS-7574
> URL: https://issues.apache.org/jira/browse/HDFS-7574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: Windows Visual Studio 2010
>Reporter: Thanh Do
>Assignee: Thanh Do
> Fix For: HDFS-6994
>
> Attachments: HDFS-7574-branch-HDFS-6994-1.patch, 
> HDFS-7574-branch-HDFS-6994-2.patch
>
>
> CMake should be able to generate a solution file for Windows Visual Studio 
> 2010. This is the first step in a series of steps toward building libhdfs3 
> successfully on Windows. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7597) Clients seeking over webhdfs may crash the NN

2015-01-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-7597:
------------------------------

 Summary: Clients seeking over webhdfs may crash the NN
 Key: HDFS-7597
 URL: https://issues.apache.org/jira/browse/HDFS-7597
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


Webhdfs seeks involve closing the current connection and issuing a new open 
request at the new offset. The RPC layer caches connections, so the DN keeps 
a lingering connection open to the NN. Connection caching is based in part on 
the UGI. Although the client used the same token for the new offset request, 
the UGI is different, which forces the DN to open another, unnecessary 
connection to the NN.

A job that performs many seeks will easily crash the NN due to fd exhaustion.
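
A minimal sketch of the client pattern described above (illustrative, not 
code from the report; host, port, and path are made up):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsSeekSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs =
        FileSystem.get(URI.create("webhdfs://namenode.example.com:50070"), conf);
    try (FSDataInputStream in = fs.open(new Path("/data/large-file"))) {
      byte[] buf = new byte[4096];
      // Each seek abandons the current HTTP stream and triggers a re-opened
      // read at the new offset; per the report, the DN-side UGI differs per
      // request, so every re-open costs the DN a fresh connection to the NN.
      for (long offset = 0; offset < (100L << 20); offset += (1L << 20)) {
        in.seek(offset);
        in.read(buf);
      }
    }
  }
}
{code}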



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7598) TestDFSClientCache.testEviction is not quite correct and fails with newer version of guava

2015-01-09 Thread Sangjin Lee (JIRA)
Sangjin Lee created HDFS-7598:
------------------------------

 Summary: TestDFSClientCache.testEviction is not quite correct and 
fails with newer version of guava
 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor


TestDFSClientCache.testEviction() is not entirely accurate in its usage of the 
guava LoadingCache.

It sets the maximum size to 2, but asserts that the loading cache will contain 
only 1 entry after inserting two entries. Guava's CacheBuilder.maximumSize() 
makes only the following promise:

{panel}
Specifies the maximum number of entries the cache may contain. Note that the 
cache may evict an entry before this limit is exceeded.
{panel}

Thus, the only invariant is that the loading cache will hold at most the 
maximum number of entries. TestDFSClientCache.testEviction asserts that it 
holds exactly (maximum size - 1) entries.

For guava 11.0.2 this happens to hold at maximum size = 2 because of the way 
that version sets the maximum segment weight. With later versions of guava, 
the maximum segment weight is set higher and eviction is less aggressive.

The test should be fixed to assert only the true invariant.
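
A minimal sketch of the distinction (illustrative, not the actual test code):

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class EvictionInvariantSketch {
  public static void main(String[] args) throws Exception {
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(2) // the cache MAY evict before this limit is exceeded
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String key) {
            return "value-" + key;
          }
        });
    cache.get("a");
    cache.get("b");
    // Portable assertion: at most maximumSize entries remain. Asserting
    // cache.size() == 1 here relies on version-specific segment weighting
    // and breaks on newer guava releases.
    if (cache.size() > 2) {
      throw new AssertionError("cache exceeded its maximum size");
    }
  }
}
{code}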



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Understanding Network Utilization of TeraSort

2015-01-09 Thread Eitan Rosenfeld
My goal is to see how the performance and network utilization of TeraSort
are affected by varying the replication factor from 1 to 3 on my 16-node
cluster. (I have modified TeraSort so that it uses my system's
replication factor.) I am sorting 100GB.

In particular, I am confused by the network utilization. With 1 replica,
the network utilization is under 1GB. With 2 replicas, it is about 117GB.
And with 3 replicas, it is about 225-230GB.

I understand that just replicating the 100GB of sorted data causes 100GB
and 200GB of network traffic in the 2 and 3 replica configurations,
respectively. However, what accounts for the extra 17GB and 25-30GB in the
2 and 3 replica configs? And what accounts for the minimal network usage in
the 1 replica configuration?
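
For reference, the back-of-the-envelope model behind my 100GB/200GB
expectation is sketched below; it assumes the first replica of each block is
written to the reducer's local DataNode, so only the remaining copies cross
the network during the write phase:

    public class ReplicationWriteTraffic {
      public static void main(String[] args) {
        final double outputGb = 100.0; // TeraSort output size
        for (int replicas = 1; replicas <= 3; replicas++) {
          // first replica assumed node-local; the rest cross the network
          double networkGb = outputGb * (replicas - 1);
          System.out.printf("replicas=%d -> ~%.0f GB of replication traffic%n",
              replicas, networkGb);
        }
      }
    }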

Note that the data is generated with TeraGen using the same replication
factor with which it is later sorted.

Thank you,
Eitan Rosenfeld