Re: About 2.7.4 Release

2017-05-03 Thread Konstantin Shvachko
Hey guys,

A few of my colleagues and I would like to help here and move the 2.7.4
release forward. A few points in this regard.

1. Reading through this thread since March 1, I see that Vinod hinted at
managing the release. Vinod, if you still want the job / have the bandwidth,
I will be happy to work with you.
Otherwise I am glad to volunteer as the release manager.

2. In addition to the current blockers and criticals, I would like to propose
a few issues to be included in the release; see the list below. These are
mostly bug fixes and optimizations, which we already have in our internal
branch and run in production, plus one minor feature, "node labeling", which
we found very handy for heterogeneous environments with mixed workloads such
as MR and Spark.

3. For marking issues for the release I propose to:
 - set the target version to 2.7.4, and
 - add a new label "release-blocker".
That way we will know which issues are targeted for the release without
reopening them for backports.

4. I see quite a few people are interested in the release. With all the
help I think we can target a release by the end of May.

Other tasks include fixing CHANGES.txt and fixing the Jenkins build for the
2.7.4 branch.

Thanks,
--Konstantin

===  List of issues for 2.7.4  ===
-- Backports
HADOOP-12975. Add du jitters
HDFS-9710. IBR batching
HDFS-10715. NPE when applying AvailableSpaceBlockPlacementPolicy
HDFS-2538. fsck removal of dot printing
HDFS-8131. space-balanced policy for balancer
HDFS-8549. abort balancer if upgrade in progress
HDFS-9412. skip small blocks in getBlocks

YARN-1471. SLS simulator
YARN-4302. SLS
YARN-4367. SLS
YARN-4612. SLS

- Node labeling
MAPREDUCE-6304
YARN-2943
YARN-4109
YARN-4140
YARN-4250
YARN-4925


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-03 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/392/

[May 2, 2017 2:36:54 PM] (aajisaka) HADOOP-14371. License error in 
TestLoadBalancingKMSClientProvider.java.
[May 2, 2017 2:52:34 PM] (aajisaka) HADOOP-14367. Remove unused setting from 
pom.xml. Contributed by Chen
[May 2, 2017 5:51:20 PM] (wang) HADOOP-14369. NetworkTopology calls expensive 
toString() when logging.
[May 2, 2017 6:49:19 PM] (wang) HADOOP-14281. Fix 
TestKafkaMetrics#testPutMetrics. Contributed by Alison
[May 2, 2017 8:06:47 PM] (templedf) YARN-6481. Yarn top shows negative 
container number in FS (Contributed
[May 2, 2017 9:50:51 PM] (jlowe) HADOOP-14306. TestLocalFileSystem tests have 
very low timeouts.
[May 3, 2017 12:51:28 AM] (rkanter) HADOOP-14352. Make some HttpServer2 SSL 
properties optional (jzhuge via
[May 3, 2017 1:34:11 AM] (shv) HDFS-11717. Add unit test for HDFS-11709 
StandbyCheckpointer should




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile

[jira] [Created] (HDFS-11745) Increase HDFS test timeouts from 1 second to 10 seconds

2017-05-03 Thread Eric Badger (JIRA)
Eric Badger created HDFS-11745:
--

 Summary: Increase HDFS test timeouts from 1 second to 10 seconds
 Key: HDFS-11745
 URL: https://issues.apache.org/jira/browse/HDFS-11745
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


1-second test timeouts are susceptible to spurious failures on overloaded or
otherwise slow machines.
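
For illustration, a minimal JUnit 4 sketch of the kind of change described;
the class and test body are hypothetical, not taken from the actual patch:

{code}
import org.junit.Test;

public class TimeoutExampleTest {
  // A 1-second budget is easily exceeded on an overloaded CI host;
  // a 10-second budget still catches hangs but tolerates slow machines.
  @Test(timeout = 10000)  // previously: timeout = 1000
  public void testCompletesQuicklyOnHealthyMachines() throws Exception {
    Thread.sleep(100);  // stands in for the real test work
  }
}
{code}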



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11746) Ozone:SCM: Add support for getContainer in SCM

2017-05-03 Thread Nandakumar (JIRA)
Nandakumar created HDFS-11746:
-

 Summary: Ozone:SCM: Add support for getContainer in SCM
 Key: HDFS-11746
 URL: https://issues.apache.org/jira/browse/HDFS-11746
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Nandakumar
Assignee: Nandakumar


Adds support for getContainer in SCM. With this change we will be able to get
the container pipeline from SCM using a containerId.
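
As a rough illustration of what such a lookup could look like, here is a
sketch; every name and signature below is hypothetical, inferred only from
the summary above, not taken from the actual patch:

{code}
import java.io.IOException;
import java.util.List;

// Hypothetical stand-ins; illustrative only, not the real SCM types.
interface Pipeline {
  List<String> getDatanodeHosts();
}

interface ScmClientSketch {
  // Resolve a containerId to the pipeline of datanodes hosting that container.
  Pipeline getContainer(String containerId) throws IOException;
}
{code}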



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11747) Ozone : need to fix OZONE_SCM_DEFAULT_PORT

2017-05-03 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11747:
-

 Summary: Ozone : need to fix  OZONE_SCM_DEFAULT_PORT
 Key: HDFS-11747
 URL: https://issues.apache.org/jira/browse/HDFS-11747
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


While deploying on a physical cluster, we found that {{OZONE_SCM_DEFAULT_PORT}}
should be set to {{OZONE_SCM_DATANODE_PORT_DEFAULT}} instead of the literal
9862 in the config keys.
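
As a before/after sketch of the described fix (the constant names come from
the report above, but the wrapping class and the datanode port value are
assumptions, not the actual patch):

{code}
public final class OzoneScmPortsSketch {
  // Value assumed purely for illustration; the real default lives in the
  // Ozone config-keys class.
  public static final int OZONE_SCM_DATANODE_PORT_DEFAULT = 9861;

  // Before (per the report): OZONE_SCM_DEFAULT_PORT was the literal 9862.
  // After: point it at the datanode port constant so the two cannot drift.
  public static final int OZONE_SCM_DEFAULT_PORT = OZONE_SCM_DATANODE_PORT_DEFAULT;
}
{code}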



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11515) -du throws ConcurrentModificationException

2017-05-03 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-11515:


I ran du on a CDH 5.11.0 NameNode with the HDFS-11515 patch, using a production
cluster fsimage. Unfortunately the same bug still exists; it is probably
triggered by another scenario.

Reopening this jira so we can discuss the next step.

> -du throws ConcurrentModificationException
> --
>
> Key: HDFS-11515
> URL: https://issues.apache.org/jira/browse/HDFS-11515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, shell
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Istvan Fajth
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, 
> HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch
>
>
> HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug.
> The bug can be reproduced running the following commands:
> {noformat}
> bash-4.1$ hdfs dfs -mkdir /tmp/d0
> bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0
> Allowing snaphot on /tmp/d0 succeeded
> bash-4.1$ hdfs dfs -touchz /tmp/d0/f4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1
> Created snapshot /tmp/d0/.snapshot/s1
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2
> Created snapshot /tmp/d0/.snapshot/s2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -du -h /tmp/d0
> du: java.util.ConcurrentModificationException
> 0 0 /tmp/d0/f4
> {noformat}
> A ConcurrentModificationException forced du to terminate abruptly.
> Correspondingly, NameNode log has the following error:
> {noformat}
> 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma
> ry from 10.0.0.198:49957 Call#2 Retry#0
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at 
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5
> 63)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav
> a:873)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> The bug is due to an improper use of HashSet, not concurrent operations. 
> Basically, a HashSet cannot be updated while an iterator is traversing it.
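
For illustration, a minimal standalone Java sketch of that failure mode
(plain JDK code, not Hadoop code):

{code}
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class HashSetIterationDemo {
  public static void main(String[] args) {
    Set<String> dirs = new HashSet<>();
    dirs.add("d2");
    dirs.add("d3");

    Iterator<String> it = dirs.iterator();
    while (it.hasNext()) {
      // Adding to the set is a structural modification; the next call to
      // it.next() then fails fast with java.util.ConcurrentModificationException.
      dirs.add(it.next() + "-snapshot");
    }
  }
}
{code}

The usual fix is to collect the additions in a separate set and merge them
after the loop, or to iterate over a copy of the set.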



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11748) Deleting an open file should invalidate the block from expected locations.

2017-05-03 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-11748:
-

 Summary: Deleting an open file should invalidate the block from 
expected locations.
 Key: HDFS-11748
 URL: https://issues.apache.org/jira/browse/HDFS-11748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Rushabh S Shah


Today, deleting an open file doesn't invalidate the rbw (replica being
written) blocks stored on the datanodes. Only when a datanode sends a full
block report and the namenode doesn't recognize the block id does the
namenode ask the datanode to invalidate it.
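
As a minimal client-side sketch of the scenario (the path and sizes are
illustrative; this only shows what "deleting an open file" means here):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteOpenFileSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path p = new Path("/tmp/open-file");
    FSDataOutputStream out = fs.create(p);
    out.write(new byte[1024]);
    out.hflush();         // replicas are now rbw on the pipeline datanodes

    fs.delete(p, false);  // file leaves the namespace; stream never closed
    // Per the report, the rbw replicas linger on the datanodes until the next
    // full block report, when the namenode no longer recognizes the block id
    // and asks the datanodes to invalidate it.
  }
}
{code}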



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11749) Ongoing file write fails when its pipeline DataNode is pulled out for maintenance

2017-05-03 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11749:
-

 Summary: Ongoing file write fails when its pipeline DataNode is 
pulled out for maintenance
 Key: HDFS-11749
 URL: https://issues.apache.org/jira/browse/HDFS-11749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


HDFS Maintenance State (HDFS-7877) is supposed to put DataNodes first into
ENTERING_MAINTENANCE state and then, once all blocks are sufficiently
replicated, transition the DNs to IN_MAINTENANCE state. UNDER_CONSTRUCTION
files, and any ongoing writes to these files, should not fail because of the
maintenance state transition. But in a few runs I have seen ongoing writes to
open files fail as their pipeline DNs are pulled out via the Maintenance State
feature. A test case is attached.

{code}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:49306,DS-eeca7153-fba2-4f2e-a044-0a292fc6dc6d,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:49302,DS-a5adf33c-81d0-413b-879c-9c4d9acbb72a,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:49306,DS-eeca7153-fba2-4f2e-a044-0a292fc6dc6d,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:49302,DS-a5adf33c-81d0-413b-879c-9c4d9acbb72a,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1299)
at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1365)
at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1545)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1460)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1443)
at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1251)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:668)
{code}
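
As a side note, the exception above points at a client-side knob. A minimal
sketch of relaxing it (whether that is an acceptable workaround for the
maintenance-state bug is a separate question):

{code}
import org.apache.hadoop.conf.Configuration;

public class PipelineReplacementPolicySketch {
  public static Configuration clientConf() {
    Configuration conf = new Configuration();
    // Keep the DEFAULT replacement policy, but let the writer fall back to
    // the remaining pipeline instead of failing when no replacement datanode
    // is available.
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
    return conf;
  }
}
{code}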



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-03 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/303/

[May 2, 2017 2:36:54 PM] (aajisaka) HADOOP-14371. License error in 
TestLoadBalancingKMSClientProvider.java.
[May 2, 2017 2:52:34 PM] (aajisaka) HADOOP-14367. Remove unused setting from 
pom.xml. Contributed by Chen
[May 2, 2017 5:51:20 PM] (wang) HADOOP-14369. NetworkTopology calls expensive 
toString() when logging.
[May 2, 2017 6:49:19 PM] (wang) HADOOP-14281. Fix 
TestKafkaMetrics#testPutMetrics. Contributed by Alison
[May 2, 2017 8:06:47 PM] (templedf) YARN-6481. Yarn top shows negative 
container number in FS (Contributed
[May 2, 2017 9:50:51 PM] (jlowe) HADOOP-14306. TestLocalFileSystem tests have 
very low timeouts.
[May 3, 2017 12:51:28 AM] (rkanter) HADOOP-14352. Make some HttpServer2 SSL 
properties optional (jzhuge via
[May 3, 2017 1:34:11 AM] (shv) HDFS-11717. Add unit test for HDFS-11709 
StandbyCheckpointer should




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverControllerStress 
   hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 
   hadoop.hdfs.server.blockmanagement.TestPendingReconstruction 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.server.namenode.TestMetadataVersionOutput 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.hdfs.TestSafeMode 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 

Timed out junit tests :

   org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-li

Re: About 2.7.4 Release

2017-05-03 Thread Zhe Zhang
Thanks for volunteering as RM, Konstantin! The plan LGTM.

I've created a nightly Jenkins job for branch-2.7 (unit tests):
https://builds.apache.org/job/Hadoop-branch2.7-nightly/

-- 
Zhe Zhang
Apache Hadoop Committer
http://zhe-thoughts.github.io/about/ | @oldcap