[VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Junping Du
Hi all,
 With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

 This is the next minor release after 2.7.0, which was released more than a 
year ago. It comprises 2,900+ fixes, improvements, and new features, most of 
which are being released from branch-2 for the first time.

  More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

  New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

  The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

  The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

  Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping


[jira] [Created] (HDFS-11540) PeerCache sync overhead

2017-03-17 Thread Ravikumar (JIRA)
Ravikumar created HDFS-11540:


 Summary: PeerCache sync overhead
 Key: HDFS-11540
 URL: https://issues.apache.org/jira/browse/HDFS-11540
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.1
Reporter: Ravikumar
Priority: Minor


PeerCache has a fair amount of synchronization overhead. In addition to 
class-level locking, expired sockets are closed inside the critical section 
during gets, and the eldest socket is closed inside it during puts.

Could a ConcurrentLinkedHashMap, with appropriate fencing on DatanodeID, speed 
it up?
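A minimal sketch of the suggested direction: per-datanode queues inside a concurrent map, so gets and puts touch only one datanode's entry instead of a class-level lock. The String keys and values are illustrative stand-ins for the real DatanodeID and Peer types, and ConcurrentHashMap is used here in place of the third-party ConcurrentLinkedHashMap named in the report.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Sketch of a lock-free peer cache keyed per datanode. Illustrative only:
 * String stands in for DatanodeID (keys) and for cached Peers (values).
 */
public class LockFreePeerCache {
    // One queue per datanode: no class-level critical section on get/put.
    private final Map<String, Queue<String>> cache = new ConcurrentHashMap<>();

    public void put(String datanodeId, String peer) {
        cache.computeIfAbsent(datanodeId, k -> new ConcurrentLinkedQueue<>())
             .add(peer);
    }

    /** Returns a cached peer for this datanode, or null if none is cached. */
    public String get(String datanodeId) {
        Queue<String> q = cache.get(datanodeId);
        return q == null ? null : q.poll();
    }
}
```

Expiry handling (the part the report wants out of the critical section) could then be done by a background sweep over the queues rather than inside get/put.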



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11541) RawErasureEncoder and RawErasureDecoder release() methods not called

2017-03-17 Thread JIRA
László Bence Nagy created HDFS-11541:


 Summary: RawErasureEncoder and RawErasureDecoder release() methods 
not called
 Key: HDFS-11541
 URL: https://issues.apache.org/jira/browse/HDFS-11541
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding, native
Affects Versions: 3.0.0-alpha2
Reporter: László Bence Nagy


The *RawErasureEncoder* and *RawErasureDecoder* classes have _release()_ 
methods which are not called from the source code. These methods should be 
called when an encoding or decoding operation is finished so that the 
dynamically allocated resources can be freed. Underlying native plugins can 
also rely on these functions to release their resources.
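The calling pattern the report asks for can be sketched as follows: invoke release() in a finally block so resources are freed even when the coding operation throws. The ReleasableCoder interface and encodeOnce helper are illustrative stand-ins, not the actual RawErasureEncoder/RawErasureDecoder API.

```java
/**
 * Illustrative stand-in for a raw coder that owns releasable resources;
 * not the real RawErasureEncoder/RawErasureDecoder classes.
 */
public class ReleaseDemo {
    public interface ReleasableCoder {
        void encode(byte[][] inputs, byte[][] outputs);
        void release(); // frees dynamically allocated (possibly native) buffers
    }

    /** Run one encode and always release the coder's resources afterwards. */
    public static void encodeOnce(ReleasableCoder coder,
                                  byte[][] inputs, byte[][] outputs) {
        try {
            coder.encode(inputs, outputs);
        } finally {
            coder.release(); // runs even if encode() throws
        }
    }
}
```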






[jira] [Created] (HDFS-11542) Fix RawErasureCoderBenchmark decoding operation

2017-03-17 Thread JIRA
László Bence Nagy created HDFS-11542:


 Summary: Fix RawErasureCoderBenchmark decoding operation
 Key: HDFS-11542
 URL: https://issues.apache.org/jira/browse/HDFS-11542
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha2
Reporter: László Bence Nagy
Priority: Minor


There are some issues with the decode operation in the 
*RawErasureCoderBenchmark.java* file. The decoding method is called like this: 
*decoder.decode(decodeInputs, ERASED_INDEXES, outputs);*. 

With an RS 6+3 configuration, a correct call would look like this: 
*decode([ d0, NULL, d2, d3, NULL, d5, p0, NULL, p2 ], [ 1, 4, 7 ], [ -, -, - ])*. 
Indexes 1, 4, and 7 are in the *ERASED_INDEXES* array, so the values at those 
indexes in the *decodeInputs* array are set to NULL, while all other data and 
parity packets are present in the array. The *outputs* array's length is 3; 
this is where the d1, d4, and p1 packets should be reconstructed. This would be 
the right behavior.

Right now the same example is called like this: *decode([ d0, d1, d2, d3, d4, 
d5, -, -, - ], [ 1, 4, 7 ], [ -, -, - ])*. This has two main problems with the 
*decodeInputs* array. First, the packets are not set to NULL where they should 
be, based on the *ERASED_INDEXES* array. Second, it does not contain any 
parity packets for decoding.

The first problem is easy to solve: the values at the proper indexes need to 
be set to NULL. The second is a little harder, because right now multiple 
rounds of encode operations run one after another, and similarly multiple 
decode operations are called one by one. Instead, encode and decode pairs 
should be called one after another, so that the encoded parity packets can be 
used in the *decodeInputs* array as parameters for decode. (Of course, their 
performance should still be measured separately.)
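The NULL-ing step described above can be sketched as follows. buildDecodeInputs is a hypothetical helper (not part of the benchmark), and byte[][] stands in for whatever buffer type the benchmark actually uses: copy all nine encoded packets, then null out the erased positions so the decoder reconstructs exactly those packets.

```java
/**
 * Sketch: prepare decode() inputs for RS 6+3 by nulling the erased slots.
 * buildDecodeInputs is an illustrative helper, not benchmark code.
 */
public class DecodeInputPrep {
    public static byte[][] buildDecodeInputs(byte[][] allPackets,
                                             int[] erasedIndexes) {
        byte[][] inputs = allPackets.clone(); // shallow copy of the 9 slots
        for (int idx : erasedIndexes) {
            inputs[idx] = null;               // mark this packet as erased
        }
        return inputs;
    }
}
```

For the example above, passing the nine packets from a preceding encode round together with erased indexes { 1, 4, 7 } would yield exactly the *[ d0, NULL, d2, d3, NULL, d5, p0, NULL, p2 ]* shape that decode() expects.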

Moreover, there is one more problem in this file. Right now the benchmark works 
with RS 6+3 and the *ERASED_INDEXES* array is fixed to *[ 6, 7, 8 ]*, so only 
the three parity packets are reconstructed. This means no real decode 
performance is measured, because no data packet needs to be reconstructed 
(even if decode works properly); effectively, only new parity packets need to 
be encoded. The exact behavior depends on the underlying erasure coding 
plugin, but the point is that data packets should also be erased in order to 
measure real decode performance.

In addition, more RS configurations (not just 6+3) could be measured as well, 
so that they can be compared.






[jira] [Created] (HDFS-11543) Test multiple erasure coding implementations

2017-03-17 Thread JIRA
László Bence Nagy created HDFS-11543:


 Summary: Test multiple erasure coding implementations
 Key: HDFS-11543
 URL: https://issues.apache.org/jira/browse/HDFS-11543
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha2
Reporter: László Bence Nagy
Priority: Minor


Potentially, multiple native erasure coding plugins will be available for use 
from HDFS later on. These plugins should be tested as well. For example, the 
*NativeRSRawErasureCoderFactory* class - which is used for instantiating the 
native ISA-L plugin's encoder and decoder objects - is used in 5 test files 
under the 
*hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/* 
directory. The files are:
- *TestDFSStripedInputStream.java*
- *TestDFSStripedOutputStream.java*
- *TestDFSStripedOutputStreamWithFailure.java*
- *TestReconstructStripedFile.java*
- *TestUnsetAndChangeDirectoryEcPolicy.java*

Other erasure coding plugins should be tested in these cases as well, in a 
clean way (not, for example, by creating a new file for every new erasure 
coding plugin). For this purpose, [parameterized 
tests|https://github.com/junit-team/junit4/wiki/parameterized-tests] might be 
used.

This also applies to the 
*hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/*
 directory, where this approach could be used, for example, for the 
interoperability tests: checking that certain erasure coding implementations 
are compatible with each other by performing the encoding and decoding 
operations with different plugins and verifying their results. The plugin 
pairs to be tested could then be the parameters of the parameterized tests.

Parameterized tests are just one idea; other solutions are possible as well.
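The core idea - one test body run against several coder implementations - can be sketched without any test framework as follows. The Coder interface and the factory suppliers are illustrative stand-ins for the real RawErasureCoderFactory implementations; in practice a JUnit parameterized runner would supply the factories instead of a list.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

/**
 * Sketch: run the same check against every supplied coder factory.
 * "Coder" is an illustrative stand-in for a raw erasure coder.
 */
public class MultiPluginCheck {
    public interface Coder {
        int[] encode(int[] data);
    }

    /** Returns how many implementations pass the shared check. */
    public static int passes(List<Supplier<Coder>> factories, int[] data) {
        int ok = 0;
        for (Supplier<Coder> factory : factories) {
            Coder coder = factory.get();
            // Shared test body, reused for every implementation: here,
            // simply that encoding the same input twice is deterministic.
            if (Arrays.equals(coder.encode(data), coder.encode(data))) {
                ok++;
            }
        }
        return ok;
    }
}
```

For interoperability tests, the parameter would be a pair of factories (encode with one plugin, decode with the other) rather than a single one.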






[jira] [Created] (HDFS-11544) libhdfs++: Improve C API error reporting

2017-03-17 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11544:
--

 Summary: libhdfs++: Improve C API error reporting
 Key: HDFS-11544
 URL: https://issues.apache.org/jira/browse/HDFS-11544
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


The thread-local string used for hdfsGetLastError wasn't reset between calls, 
so it could return stale results in confusing ways. Now it gets reset to a 
placeholder saying that no error string has been set.

Also fixed inconsistent indentation, and marked which functions are used by 
the hdfs.h API and the hdfs_ext.h API to make it easier to see when changes 
could break compatibility. Included some minor cleanup of the common-case 
catch blocks.
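The reset-per-call pattern described here is language-agnostic; a minimal sketch in Java (libhdfs++ itself is C++, and these names are illustrative, not the libhdfs++ API):

```java
/**
 * Sketch of a per-thread last-error string that is reset to a placeholder
 * at the start of every API call, so later calls never see stale errors.
 */
public class LastError {
    private static final String PLACEHOLDER =
        "no error has been set on this thread";

    private static final ThreadLocal<String> lastError =
        ThreadLocal.withInitial(() -> PLACEHOLDER);

    /** Called on entry to each API function: clear any stale error. */
    public static void enterApiCall() { lastError.set(PLACEHOLDER); }

    public static void setError(String msg) { lastError.set(msg); }

    public static String getLastError() { return lastError.get(); }
}
```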






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/

[Mar 16, 2017 2:08:30 PM] (stevel) HDFS-11431. hadoop-hdfs-client JAR does not 
include
[Mar 16, 2017 2:30:10 PM] (jlowe) YARN-4051. ContainerKillEvent lost when 
container is still recovering
[Mar 16, 2017 3:54:59 PM] (kihwal) HDFS-10601. Improve log message to include 
hostname when the NameNode is
[Mar 16, 2017 7:06:51 PM] (stevel) Revert "HDFS-11431. hadoop-hdfs-client JAR 
does not include
[Mar 16, 2017 7:20:46 PM] (jitendra) HDFS-11533. reuseAddress option should be 
used for child channels in
[Mar 16, 2017 10:07:38 PM] (wang) HDFS-10530. BlockManager reconstruction work 
scheduling should correctly
[Mar 16, 2017 11:08:32 PM] (liuml07) HADOOP-14191. Duplicate hadoop-minikdc 
dependency in hadoop-common
[Mar 17, 2017 1:13:43 AM] (arp) HDFS-10394. move declaration of okhttp version 
from hdfs-client to




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-compile-javac-root.txt
  [180K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [296K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/

[Mar 16, 2017 2:08:30 PM] (stevel) HDFS-11431. hadoop-hdfs-client JAR does not 
include
[Mar 16, 2017 2:30:10 PM] (jlowe) YARN-4051. ContainerKillEvent lost when 
container is still recovering
[Mar 16, 2017 3:54:59 PM] (kihwal) HDFS-10601. Improve log message to include 
hostname when the NameNode is
[Mar 16, 2017 7:06:51 PM] (stevel) Revert "HDFS-11431. hadoop-hdfs-client JAR 
does not include
[Mar 16, 2017 7:20:46 PM] (jitendra) HDFS-11533. reuseAddress option should be 
used for child channels in
[Mar 16, 2017 10:07:38 PM] (wang) HDFS-10530. BlockManager reconstruction work 
scheduling should correctly
[Mar 16, 2017 11:08:32 PM] (liuml07) HADOOP-14191. Duplicate hadoop-minikdc 
dependency in hadoop-common
[Mar 17, 2017 1:13:43 AM] (arp) HDFS-10394. move declaration of okhttp version 
from hdfs-client to




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [136K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Eric Payne
+1
Thanks, Junping, for your efforts to get this release out.
I downloaded and built the source, and I did the following manual testing on a 
2-node pseudo cluster:
- Streaming job
- Inter-queue (cross-queue) preemption: verified that only the expected amount 
of preemption occurred.
- Intra-queue (in-queue) preemption, with higher-priority apps preempting 
lower-priority ones.
- Limited node label testing.
- YARN distributed shell, both with and without keeping containers across AM 
restart.
- Killing apps from the application UI









Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Daniel Templeton
Thanks for the new RC, Junping.  I built from source and tried it out on 
a 2-node cluster with HA enabled.  I ran a pi job and some streaming 
jobs.  I tested that localization and failover work correctly, and I 
played a little with the YARN and HDFS web UIs.


I did encounter an old friend of mine: if you submit a streaming job with 
input that is only 1 block, you nonetheless get 2 mappers that both process 
the same split. What's new this time is that the second mapper was 
consistently failing on certain input sizes. I (re)verified that the issue 
also exists in 2.7.3, so it's not a regression. I'm pretty sure it's been 
there since at least 2.6.0. I filed MAPREDUCE-6864 for it.


Given that my issue was not a regression, I'm +1 on the RC.

Daniel








[jira] [Created] (HDFS-11545) Propagate DataNode's slow disks info to the NameNode via Heartbeat

2017-03-17 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-11545:
-

 Summary: Propagate DataNode's slow disks info to the NameNode via 
Heartbeat
 Key: HDFS-11545
 URL: https://issues.apache.org/jira/browse/HDFS-11545
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


HDFS-11461 introduces slow disk detection in the DataNode. This information 
can be propagated to the NameNode so that the NameNode has disk information 
for all DataNodes and can compare the stats from all of them.






[jira] [Created] (HDFS-11546) Federation Router RPC server

2017-03-17 Thread Inigo Goiri (JIRA)
Inigo Goiri created HDFS-11546:
--

 Summary: Federation Router RPC server
 Key: HDFS-11546
 URL: https://issues.apache.org/jira/browse/HDFS-11546
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Inigo Goiri


The RPC server side of the Federation Router implements ClientProtocol.






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Mingliang Liu
Thanks Junping for doing this.

+1 (non-binding)

0. Downloaded the source tar.gz file and checked the MD5 checksum
1. Built Hadoop from source successfully
2. Deployed a single-node cluster and started it successfully
3. Operated HDFS from the command line: ls, put, distcp, dfsadmin, etc.
4. Ran Hadoop MapReduce examples: grep
5. Operated AWS S3 using the S3A scheme from the command line: ls, cat, distcp
6. Checked the HDFS service logs

L




Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Performed a native build from the release tag
- Deployed to a single-node cluster
- Ran some sample jobs

Jason
 


[jira] [Created] (HDFS-11547) Restore logs for slow BlockReceiver while writing data to disk

2017-03-17 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-11547:


 Summary: Restore logs for slow BlockReceiver while writing data to 
disk
 Key: HDFS-11547
 URL: https://issues.apache.org/jira/browse/HDFS-11547
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


The logs for a slow BlockReceiver while writing data to disk were accidentally 
removed. They should be added back.


