[jira] [Created] (HDFS-10191) [NNBench] OP_DELETE Operation isn't working

2016-03-22 Thread J.Andreina (JIRA)
J.Andreina created HDFS-10191:
-

 Summary: [NNBench] OP_DELETE Operation isn't working
 Key: HDFS-10191
 URL: https://issues.apache.org/jira/browse/HDFS-10191
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina


After the fix for MAPREDUCE-6363, the OP_DELETE operation in NNBench isn't working.





[jira] [Created] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10192:
---

 Summary: Namenode safemode not coming out during failover
 Key: HDFS-10192
 URL: https://issues.apache.org/jira/browse/HDFS-10192
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Scenario:
===
1) Write some blocks.
2) Wait till an edits roll happens.
3) Stop the SNN.
4) Delete some blocks on the ANN and wait till the blocks are deleted on the DNs as well.
5) Restart the SNN and wait till block reports come from the DataNodes to the SNN.
6) Kill the ANN, then transition the SNN to active.





[jira] [Resolved] (HDFS-9930) libhdfs++: add hooks to facilitate fault testing

2016-03-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen resolved HDFS-9930.
--
Resolution: Duplicate

Duplicate of HDFS-9616.

Thanks, [~James Clampffer]

> libhdfs++: add hooks to facilitate fault testing
> 
>
> Key: HDFS-9930
> URL: https://issues.apache.org/jira/browse/HDFS-9930
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>






[jira] [Reopened] (HDFS-8727) Allow using path style addressing for accessing the s3 endpoint

2016-03-22 Thread Stephen Montgomery (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Montgomery reopened HDFS-8727:
--
  Assignee: (was: Andrew Baptist)

Hi,
I'd like to re-open this ticket please. I've done some further digging into 
this and believe that Andrew's original patch is still needed, i.e. using a 
Hadoop S3A config property flag to "switch on" path-style access in the 
underlying Amazon S3 client. Overriding the custom S3A endpoint has no effect 
(unless you specifically use an IPv4 address, which is more of a 
workaround/hack).
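
To make that concrete, here is a minimal sketch of the kind of switch the 
patch adds, assuming a hypothetical property name ("fs.s3a.path.style.access") 
and the AWS SDK v1 client API; conf, credentials and awsConf stand in for the 
objects the S3A filesystem already builds:

{code}
// Hypothetical wiring: read a boolean flag and force path-style URLs
// (http://endpoint/bucket/key) instead of virtual-host-style
// (http://bucket.endpoint/key).
AmazonS3Client s3 = new AmazonS3Client(credentials, awsConf);
if (conf.getBoolean("fs.s3a.path.style.access", false)) {
  s3.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
}
{code}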

To force/trick the Amazon S3 client into using old path-style access (instead 
of virtual hosting) you can use dodgy bucket names (e.g. '..' or '.-' in the 
name, caps, etc.) and IPv4 addresses for the endpoint - see the 
com.amazonaws.services.s3.AmazonS3Client.configRequest() method - pretty much 
making sure that the DNS lookups will fail for syntactic reasons.

I'm happy to update Andrew's original patch and supply a test case, if needed. 
As Andrew mentioned, the test case will be of no real benefit as it will just 
be exercising Amazon client functionality. It's also hard to write, as the AWS 
client is pretty inaccessible when it comes to confirming that the flag has 
been set.

What's the process for re-opening this ticket? And which Hadoop branch will 
this be targeted at? It looks like the 2.8 branch has all of the S3A fixes...?

Thanks,
Stephen


> Allow using path style addressing for accessing the s3 endpoint
> ---
>
> Key: HDFS-8727
> URL: https://issues.apache.org/jira/browse/HDFS-8727
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Baptist
>  Labels: features
> Attachments: hdfs-8728.patch.2
>
>
> There is no ability to specify using path-style access for the S3 endpoint. 
> There are numerous non-Amazon implementations of storage that support the 
> Amazon APIs but only support path-style access, such as Cleversafe and Ceph. 
> Additionally, in many environments it is difficult to configure DNS correctly 
> to get virtual-host-style addressing to work.





[jira] [Created] (HDFS-10193) fuse_dfs segfaults if uid cannot be resolved to a username

2016-03-22 Thread John Thiltges (JIRA)
John Thiltges created HDFS-10193:


 Summary: fuse_dfs segfaults if uid cannot be resolved to a username
 Key: HDFS-10193
 URL: https://issues.apache.org/jira/browse/HDFS-10193
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.6.0, 2.0.0-alpha
 Environment: Confirmed with Cloudera 
hadoop-hdfs-fuse-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64 on CentOS 6
Reporter: John Thiltges


When a user does an 'ls' on an HDFS FUSE mount, dfs_getattr() is called and 
fuse_dfs attempts to resolve the user's uid into a username string with 
getUsername(). If this lookup is unsuccessful, getUsername() returns NULL, 
leading to a segfault in hdfsConnCompare().

Sites storing NSS info in a remote database (such as LDAP) will occasionally 
have NSS failures if there are connectivity or daemon issues. Running processes 
accessing the HDFS mount during this time may cause the fuse_dfs process to 
crash, disabling the mount.

To reproduce the issue:
1) Add a new local user
2) su to the new user
3) As root, edit /etc/passwd, changing the new user's uid number
4) As the new user, do an 'ls' on an HDFS FUSE mount. This should cause a segfault.


Backtrace from fuse_dfs segfault 
(hadoop-hdfs-fuse-2.0.0+545-1.cdh4.1.1.p0.21.osg33.el6.x86_64)
{noformat}
#0  0x003f43c32625 in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x003f43c33e05 in abort () at abort.c:92
#2  0x003f46beb785 in os::abort (dump_core=true) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:1640
#3  0x003f46d5f03f in VMError::report_and_die (this=0x7ffa3cdf86f0) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1075
#4  0x003f46d5f70b in crash_handler (sig=11, info=0x7ffa3cdf88b0, ucVoid=0x7ffa3cdf8780) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/vmError_linux.cpp:106
#5  <signal handler called>
#6  os::is_first_C_frame (fr=<optimized out>) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/runtime/os.cpp:1025
#7  0x003f46d5e071 in VMError::report (this=0x7ffa3cdf9560, st=0x7ffa3cdf93e0) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:617
#8  0x003f46d5ebad in VMError::report_and_die (this=0x7ffa3cdf9560) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1009
#9  0x003f46bf0322 in JVM_handle_linux_signal (sig=11, info=0x7ffa3cdf9730, ucVoid=0x7ffa3cdf9600, abort_if_unrecognized=1021285600) at /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:531
#10 <signal handler called>
#11 __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:259
#12 0x00403d3d in hdfsConnCompare (head=<optimized out>, elm=<optimized out>) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:204
#13 hdfsConnTree_RB_FIND (head=<optimized out>, elm=<optimized out>) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:81
#14 0x00404245 in hdfsConnFind (usrname=0x0, ctx=0x7ff95013b800, out=0x7ffa3cdf9c60) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:220
#15 fuseConnect (usrname=0x0, ctx=0x7ff95013b800, out=0x7ffa3cdf9c60) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:517
#16 0x00404337 in fuseConnectAsThreadUid (conn=0x7ffa3cdf9c60) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:544
#17 0x00404c55 in dfs_getattr (path=0x7ff950150de0 "/user/users01", st=0x7ffa3cdf9d20) at /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c:37
#18 0x003f47c0b353 in lookup_path (f=0x15e39f0, nodeid=22546, name=0x7ff9602d0058 "users01", path=<optimized out>, e=0x7ffa3cdf9d10, fi=<optimized out>) at fuse.c:1824
#19 0x003f47c0d865 in fuse_lib_lookup (req=0x7ff950003fe0, parent=22546, name=0x7ff9602d0058 "users01") at fuse.c:2017
#20 0x003f47c120ef in fuse_do_work (data=0x7ff9600e3f30) at fuse_loop_mt.c:107
#21 0x003f44407aa1 in start_thread (arg=0x7ffa3cdfa700) at pthread_create.c:301
#22 0x003f43ce893d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
{noformat}





[jira] [Resolved] (HDFS-350) DFSClient more robust if the namenode is busy doing GC

2016-03-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-350.

Resolution: Not A Problem

I'm resolving this issue.  In current versions, the client is more robust to 
this kind of failure.  The RPC layer implements retry policies.  Retried 
operations are handled gracefully using either an inherently idempotent 
implementation of the RPC or the retry cache for at-most-once execution.  In 
the event of an extremely long GC, the client would either retry and succeed 
after completion of the GC, or in more extreme cases it would trigger an HA 
failover and the client would successfully issue its call to the new active 
NameNode.
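
As a rough illustration of that retry machinery (a sketch using Hadoop's 
generic org.apache.hadoop.io.retry utilities, not the exact NameNode proxy 
wiring; rpcProxy stands in for the raw RPC proxy object):

{code}
// Wrap a protocol proxy with a RetryPolicy: calls that fail transiently
// (e.g. a timeout during a long NameNode GC pause) are retried instead of
// immediately surfacing an exception to the application.
RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
    5, 1, TimeUnit.SECONDS);
ClientProtocol retryingProxy = (ClientProtocol) RetryProxy.create(
    ClientProtocol.class, rpcProxy, policy);
{code}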

> DFSClient more robust if the namenode is busy doing GC
> --
>
> Key: HDFS-350
> URL: https://issues.apache.org/jira/browse/HDFS-350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
>
> In the current code, if the client (writer) encounters an RPC error while 
> fetching a new block id from the namenode, it does not retry. It throws an 
> exception to the application. This becomes especially bad if the namenode is 
> in the middle of a GC and does not respond in time. The reason the client 
> throws an exception is that it does not know whether the namenode 
> successfully allocated a block for this file.
> One possible enhancement would be to make the client retry the addBlock RPC 
> if needed. The client can send the block list that it currently has. The 
> namenode can match the block list sent by the client with what it has in its 
> own metadata and then send back a new blockid (or a previously allocated 
> blockid that the client had not yet received because the earlier RPC 
> timed out). This will make the client more robust!
> This works even when we support Appends, because the namenode will *always* 
> verify that the client has the lease for the file in question.





[jira] [Created] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HDFS-10194:


 Summary: FSDataOutputStream.write() allocates new byte buffer on 
each operation
 Key: HDFS-10194
 URL: https://issues.apache.org/jira/browse/HDFS-10194
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.1
Reporter: Vladimir Rodionov


This is the code:
{code}
private DFSPacket createPacket(int packetSize, int chunksPerPkt,
    long offsetInBlock, long seqno, boolean lastPacketInBlock)
    throws InterruptedIOException {
  final byte[] buf;
  final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;

  try {
    buf = byteArrayManager.newByteArray(bufferSize);
  } catch (InterruptedException ie) {
    final InterruptedIOException iioe = new InterruptedIOException(
        "seqno=" + seqno);
    iioe.initCause(ie);
    throw iioe;
  }

  return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
      getChecksumSize(), lastPacketInBlock);
}
{code}







[jira] [Created] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10195:
---

 Summary: Ozone: Add container persistence
 Key: HDFS-10195
 URL: https://issues.apache.org/jira/browse/HDFS-10195
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Adds file-based persistence for containers.





[jira] [Resolved] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov resolved HDFS-10194.
--
Resolution: Invalid

HDFS-7276 provides a ByteArrayManager, which is off by default. Enabling this 
feature resolves the issue.
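
For reference, a minimal sketch of turning it on from client code (assuming 
the property name from HDFS-7276, dfs.client.write.byte-array-manager.enabled; 
check hdfs-default.xml for your release):

{code}
// Enable client-side byte-array pooling so createPacket() reuses buffers
// instead of allocating a fresh byte[] for every packet.
Configuration conf = new Configuration();
conf.setBoolean("dfs.client.write.byte-array-manager.enabled", true);
FileSystem fs = FileSystem.get(conf);
{code}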

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HDFS-10194
> URL: https://issues.apache.org/jira/browse/HDFS-10194
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Vladimir Rodionov
>
> This is the code:
> {code}
> private DFSPacket createPacket(int packetSize, int chunksPerPkt,
>     long offsetInBlock, long seqno, boolean lastPacketInBlock)
>     throws InterruptedIOException {
>   final byte[] buf;
>   final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;
> 
>   try {
>     buf = byteArrayManager.newByteArray(bufferSize);
>   } catch (InterruptedException ie) {
>     final InterruptedIOException iioe = new InterruptedIOException(
>         "seqno=" + seqno);
>     iioe.initCause(ie);
>     throw iioe;
>   }
> 
>   return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
>       getChecksumSize(), lastPacketInBlock);
> }
> {code}





[jira] [Created] (HDFS-10196) Ozone : Enable better error reporting for failed commands in ozone shell

2016-03-22 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10196:
---

 Summary: Ozone : Enable better error reporting for failed commands 
in ozone shell
 Key: HDFS-10196
 URL: https://issues.apache.org/jira/browse/HDFS-10196
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Trivial
 Fix For: HDFS-7240


Fix the error message printing.





[jira] [Created] (HDFS-10197) TestFsDatasetCache failing intermittently due to timeout

2016-03-22 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-10197:


 Summary: TestFsDatasetCache failing intermittently due to timeout
 Key: HDFS-10197
 URL: https://issues.apache.org/jira/browse/HDFS-10197
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Lin Yiqun
Assignee: Lin Yiqun


In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
failure reasons from recent Jenkins reports; they are all timeout errors.
{code}
Tests in error: 
  TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
wait...
  TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
Thr...
{code}
{code}
Tests in error: 
  TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
milliseco...
  TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  test 
...
{code}
But there is a small difference between these failures.

* The first is because the total wait time exceeded {{waitForMillis}} (60s 
here), so the timeout exception is thrown and the thread diagnostic string is 
printed.
{code}
long st = Time.now();
do {
  boolean result = check.get();
  if (result) {
return;
  }
  
  Thread.sleep(checkEveryMillis);
} while (Time.now() - st < waitForMillis);

throw new TimeoutException("Timed out waiting for condition. " +
"Thread diagnostics:\n" +
TimedOutTestsListener.buildThreadDiagnosticString());
{code}

* The second is because the test's elapsed time exceeded its timeout setting, 
as in {{TestFsDatasetCache#testPageRounder}}.

We should adjust the timeout values for these unit tests that sometimes fail 
due to timeouts.
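
For example, a sketch of the kind of change proposed (the test name is from 
the report above; the new timeout value is only illustrative):

{code}
// Raise the per-test JUnit timeout; the actual value should come from
// observed run times on Jenkins, not from this illustrative number.
@Test(timeout = 120000)
public void testPageRounder() throws Exception {
  // ... test body unchanged ...
}
{code}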





[jira] [Created] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10198:
--

 Summary: File browser web UI should split to pages when files/dirs 
are too many
 Key: HDFS-10198
 URL: https://issues.apache.org/jira/browse/HDFS-10198
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.2
Reporter: Weiwei Yang


When there are a large number of files/dirs, the HDFS file browser UI takes 
too long to load, and it loads all items in one single page, which makes it 
very hard to read. We should have it split into pages.
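
As an aside, the client API already exposes incremental listing, which is the 
same idea the web UI needs; a minimal sketch (the path and page size are made 
up for illustration):

{code}
// A RemoteIterator pulls directory entries in batches from the NameNode,
// so a caller can render one "page" at a time instead of the full listing.
FileSystem fs = FileSystem.get(new Configuration());
RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/big/dir"));
int shown = 0;
while (it.hasNext() && shown < 100) {   // 100 = hypothetical page size
  System.out.println(it.next().getPath().getName());
  shown++;
}
{code}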





Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1016

2016-03-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1016/

Changes:

[gtcarrera9] MAPREDUCE-6110. JobHistoryServer CLI throws NullPointerException 
with

--
[...truncated 6838 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.442 sec - in 
org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.458 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.924 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.629 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.962 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.05 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.744 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.092 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.249 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.82 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.816 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.477 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.12 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
suppor

Hadoop-Hdfs-trunk-Java8 - Build # 1016 - Failure

2016-03-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1016/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7031 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:20 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.104 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:24 h
[INFO] Finished at: 2016-03-23T03:12:46+00:00
[INFO] Final Memory: 56M/475M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN

Error Message:
test timed out after 60 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN(TestRBWBlockInvalidation.java:121)




[jira] [Resolved] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-10198.

   Resolution: Duplicate
Fix Version/s: 2.8.0

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
> Fix For: 2.8.0
>
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> very hard to read. We should have it split into pages.





Build failed in Jenkins: Hadoop-Hdfs-trunk #2948

2016-03-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2948/

Changes:

[gtcarrera9] MAPREDUCE-6110. JobHistoryServer CLI throws NullPointerException 
with

--
[...truncated 5131 lines...]
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.797 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.979 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.928 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.954 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.18 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.644 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.103 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.389 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.715 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.197 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.545 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.167 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.914 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.674 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.269 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.966 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.471 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.888 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.608 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.176 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.457 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.452 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.206 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.924 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.659 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Test

Hadoop-Hdfs-trunk - Build # 2948 - Still Failing

2016-03-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2948/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5324 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:32 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:28 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.067 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:34 h
[INFO] Finished at: 2016-03-23T03:25:40+00:00
[INFO] Final Memory: 57M/709M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1051)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline.testNoLifelineSentIfHeartbeatsOnTime

Error Message:
Expect metrics to count no lifeline calls. expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Expect metrics to count no lifeline calls. 
expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline.testNoLifelineSentIfHeartbeatsOnTime(TestDataNodeLifeline.java:256)