Build failed in Jenkins: Hadoop-Hdfs-trunk #2201

2015-08-01 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-3990. AsyncDispatcher may overloaded with RMAppNodeUpdateEvent 
when Node is connected/disconnected. Contributed by Bibin A Chundatt

[jlowe] MAPREDUCE-6394. Speed up Task processing loop in HsTasksBlock#render(). 
Contributed by Ray Chiang

[aw] HADOOP-12249. pull argument parsing into a function (aw)

[aw] HADOOP-10854. unit tests for the shell scripts (aw)

[cmccabe] HADOOP-7824. NativeIO.java flags and identifiers must be set 
correctly for each platform, not hardcoded to their Linux values (Martin Walsh 
via Colin P. McCabe)

[cmccabe] HADOOP-12183. Annotate the HTrace span created by FsShell with the 
command-line arguments passed by the user (Masatake Iwasaki via Colin P.  
McCabe)

[xyao] HDFS-6860. BlockStateChange logs are too noisy. Contributed by Chang Li 
and Xiaoyu Yao.

[zxu] HADOOP-12268. AbstractContractAppendTest#testRenameFileBeingAppended 
misses rename operation. Contributed by Zhihai Xu

[zxu] HDFS-8847. change TestHDFSContractAppend to not override 
testRenameFileBeingAppended method. Contributed by Zhihai Xu

--
[...truncated 9462 lines...]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"DataNode: [[[DISK] [DISK] heartbeating to localhost/127.0.0.1:33264" daemon prio=5 tid=64 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:725)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846)
at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 35519" daemon prio=5 tid=87 
timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at java.util.TimerThread.mainLoop(Timer.java:552)
at java.util.TimerThread.run(Timer.java:505)
"175729056@qtp-368403943-0" daemon prio=5 tid=106 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"DecommissionMonitor-0" daemon prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 45920" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:125)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2139)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7db6306b" daemon 
prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:336)
at java.lan

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #263

2015-08-01 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-3990. AsyncDispatcher may overloaded with RMAppNodeUpdateEvent 
when Node is connected/disconnected. Contributed by Bibin A Chundatt

[jlowe] MAPREDUCE-6394. Speed up Task processing loop in HsTasksBlock#render(). 
Contributed by Ray Chiang

[aw] HADOOP-12249. pull argument parsing into a function (aw)

[aw] HADOOP-10854. unit tests for the shell scripts (aw)

[cmccabe] HADOOP-7824. NativeIO.java flags and identifiers must be set 
correctly for each platform, not hardcoded to their Linux values (Martin Walsh 
via Colin P. McCabe)

[cmccabe] HADOOP-12183. Annotate the HTrace span created by FsShell with the 
command-line arguments passed by the user (Masatake Iwasaki via Colin P.  
McCabe)

[xyao] HDFS-6860. BlockStateChange logs are too noisy. Contributed by Chang Li 
and Xiaoyu Yao.

[zxu] HADOOP-12268. AbstractContractAppendTest#testRenameFileBeingAppended 
misses rename operation. Contributed by Zhihai Xu

[zxu] HDFS-8847. change TestHDFSContractAppend to not override 
testRenameFileBeingAppended method. Contributed by Zhihai Xu

--
[...truncated 7858 lines...]
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.227 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.319 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.65 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.575 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.618 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.064 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.813 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.805 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.838 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.755 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.388 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Erro

Hadoop-Hdfs-trunk-Java8 - Build # 263 - Still Failing

2015-08-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/263/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8051 lines...]
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:03 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.091 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-08-01T14:28:02+00:00
[INFO] Final Memory: 53M/768M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4319627 bytes
Compression is 0.0%
Took 8 sec
Recording test results
Updating YARN-3990
Updating HDFS-8847
Updating HADOOP-12268
Updating HADOOP-7824
Updating MAPREDUCE-6394
Updating HADOOP-10854
Updating HDFS-6860
Updating HADOOP-12183
Updating HADOOP-12249
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:430)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:128)
Caused by: java.lang.IllegalStateException: null
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.pause(TestAppendSnapshotTruncate.java:479)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:247)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:140)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
at java.lang.Thread.run(Thread.java:744)
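The "IllegalStateException: null" in the caused-by trace above is the expected shape when Guava's Preconditions.checkState(boolean) fails with no message argument. A minimal stand-in (not Guava itself, just a sketch of the no-message overload) shows why the message prints as null:

```java
public class CheckStateDemo {
    // Minimal re-implementation of the no-message overload of
    // com.google.common.base.Preconditions.checkState, for illustration only.
    static void checkState(boolean expression) {
        if (!expression) {
            // No message is supplied, so getMessage() returns null and the
            // logged trace reads "java.lang.IllegalStateException: null".
            throw new IllegalStateException();
        }
    }

    public static void main(String[] args) {
        try {
            checkState(false);
        } catch (IllegalStateException e) {
            System.out.println("message = " + e.getMessage()); // prints "message = null"
        }
    }
}
```

In other words, the "null" is not itself the bug; it just means the failing precondition in TestAppendSnapshotTruncate$Worker.pause was checked without a descriptive message.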


FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:288)


FAILED:  
org.apache.hadoop.hdfs.serv

[jira] [Reopened] (HDFS-8840) Inconsistent log level practice

2015-08-01 Thread songwanging (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

songwanging reopened HDFS-8840:
---

We could use "LOG.isFatalEnabled()" instead of "LOG.isDebugEnabled()" here; the 
current guard does not follow good logging practice.

> Inconsistent log level practice
> ---
>
> Key: HDFS-8840
> URL: https://issues.apache.org/jira/browse/HDFS-8840
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0, 2.5.1, 2.5.2, 2.7.1
>Reporter: songwanging
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Attachments: HDFS-8840-00.patch
>
>
> In method "checkLogsAvailableForRead()" of class: 
> hadoop-2.7.1-src\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\server\namenode\ha\BootstrapStandby.java
> The log level is incorrect: after checking "LOG.isDebugEnabled()", the code 
> should call "LOG.debug(msg, e);", but it currently calls "LOG.fatal(msg, e);". 
> The log level is inconsistent with its guard.
> The source code of this method is:
> private boolean checkLogsAvailableForRead(FSImage image, long imageTxId, long 
> curTxIdOnOtherNode) {
>   ...
> } catch (IOException e) {
>...
>   if (LOG.isDebugEnabled()) {
> LOG.fatal(msg, e);
>   } else {
> LOG.fatal(msg);
>   }
>   return false;
> }
>   }
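For context, the two readings of the guard in the snippet above can be contrasted in a small sketch. Hadoop's BootstrapStandby uses commons-logging; this stand-in (names like chooseLogCall are illustrative, not Hadoop code) only models which logging call each reading of the guard would select:

```java
public class LogGuardSketch {
    // Current code in BootstrapStandby: the DEBUG guard only decides whether
    // the stack trace is attached; the level stays FATAL either way.
    static String currentBehavior(boolean debugEnabled) {
        return debugEnabled ? "LOG.fatal(msg, e)" : "LOG.fatal(msg)";
    }

    // The reporter's reading: a LOG.isDebugEnabled() guard should pair with
    // LOG.debug(...), so the guarded branch would log at DEBUG instead.
    static String reportersSuggestion(boolean debugEnabled) {
        return debugEnabled ? "LOG.debug(msg, e)" : "LOG.fatal(msg)";
    }

    public static void main(String[] args) {
        System.out.println(currentBehavior(true));      // prints "LOG.fatal(msg, e)"
        System.out.println(reportersSuggestion(true));  // prints "LOG.debug(msg, e)"
    }
}
```

Which reading is right is exactly what the reopened issue is debating: the existing code may be intentional (always log at FATAL, attach the exception only when debugging), in which case the fix would be a clarifying comment rather than a level change.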



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)