Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/

[Sep 19, 2019 1:27:23 PM] (kihwal) HADOOP-16582. LocalFileSystem's mkdirs() 
does not work as expected under
[Sep 19, 2019 8:21:42 PM] (ericp) YARN-7817. Add Resource reference to RM's 
NodeInfo object so REST API
[Sep 19, 2019 8:25:31 PM] (ericp) YARN-7860. Fix UT failure 
TestRMWebServiceAppsNodelabel#testAppsRunning.
[Sep 19, 2019 10:27:30 PM] (jhung) YARN-7410. Cleanup FixedValueResource to 
avoid dependency to




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
      hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client

   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [164K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/450/artifac

[jira] [Created] (HDFS-14860) Clean Up StoragePolicySatisfyManager.java

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14860:
-

 Summary: Clean Up StoragePolicySatisfyManager.java
 Key: HDFS-14860
 URL: https://issues.apache.org/jira/browse/HDFS-14860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor


Remove superfluous debug log guards, and use the {{java.util.concurrent}} package 
for the internal structure instead of external synchronization.
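
A minimal sketch of both kinds of change, assuming a queue-like internal 
structure; the names below are illustrative, not taken from 
StoragePolicySatisfyManager:

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Sketch only: a concurrent structure needs no external synchronized blocks. */
class SatisfierQueueSketch {
  private final Queue<Long> pathIds = new ConcurrentLinkedQueue<>();

  void add(long pathId) {
    pathIds.add(pathId);
    // Superfluous guard removed: with parameterized logging,
    //   if (LOG.isDebugEnabled()) { LOG.debug("Queued path {}", pathId); }
    // collapses to a plain LOG.debug("Queued path {}", pathId);
  }

  Long poll() {
    return pathIds.poll(); // safe without holding any lock
  }
}
{code}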






[jira] [Created] (HDFS-14861) Reset LowRedundancyBlocks Iterator periodically

2019-09-20 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDFS-14861:


 Summary: Reset LowRedundancyBlocks Iterator periodically
 Key: HDFS-14861
 URL: https://issues.apache.org/jira/browse/HDFS-14861
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.3.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


When the namenode needs to schedule blocks for reconstruction, the blocks are 
placed into the neededReconstruction object in the BlockManager. This is an 
instance of LowRedundancyBlocks, which maintains a list of priority queues 
where the blocks are held until they are scheduled for reconstruction / 
replication.

Every 3 seconds, by default, a number of blocks are retrieved from 
LowRedundancyBlocks. The method LowRedundancyBlocks.chooseLowRedundancyBlocks() 
is used to retrieve the next set of blocks using a bookmarked iterator. Each 
call to this method moves the iterator forward. The number of blocks retrieved 
is governed by the formula:

number_of_live_nodes * dfs.namenode.replication.work.multiplier.per.iteration 
(default 2)

The namenode then attempts to schedule those blocks on datanodes, but each 
datanode has a limit on how many blocks can be queued against it (controlled by 
dfs.namenode.replication.max-streams), so not all of the retrieved blocks may 
be scheduled. There may also be other block-availability reasons why blocks are 
not scheduled.

As the iterator in chooseLowRedundancyBlocks() always moves forward, blocks 
which were not scheduled are not retried until the end of the queue is reached 
and the iterator is reset.

If the replication queue is very large (e.g. several nodes are being 
decommissioned), or if blocks are continuously added to the replication queue 
(e.g. nodes decommissioning using the proposal in HDFS-14854), it may take a 
very long time for the iterator to be reset to the start.

As a result, a few blocks for a node that is decommissioning or entering 
maintenance mode can get left behind, taking many hours or even days to be 
retried, which can stop decommissioning from completing.

With this Jira, I would like to suggest we reset the iterator after a 
configurable number of calls to chooseLowRedundancyBlocks(), so that any 
left-behind blocks are retried.
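
A minimal, self-contained sketch of the proposed reset; the threshold constant 
stands in for the suggested configurable value, and this is not the actual 
LowRedundancyBlocks code:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Sketch only: a bookmarked list that rewinds after N calls. */
class BookmarkedQueueSketch<T> {
  private final List<T> blocks = new ArrayList<>();
  private int bookmark = 0;        // position of the "iterator"
  private int callsSinceReset = 0;
  // Hypothetical stand-in for the proposed configurable threshold.
  private static final int RESET_THRESHOLD = 2400; // ~2h at one call per 3s

  synchronized void add(T item) { blocks.add(item); }

  /** Returns the next block to schedule, or null if the list is empty. */
  synchronized T next() {
    if (blocks.isEmpty()) {
      return null;
    }
    if (bookmark >= blocks.size() || ++callsSinceReset >= RESET_THRESHOLD) {
      bookmark = 0;              // rewind so skipped blocks are retried
      callsSinceReset = 0;
    }
    return blocks.get(bookmark++);
  }
}
{code}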






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/

[Sep 19, 2019 7:16:12 AM] (elek) HDDS-730. ozone fs cli prints hadoop fs in 
usage
[Sep 19, 2019 8:41:00 AM] (elek) HDDS-2147. Include dumpstream in test report
[Sep 19, 2019 9:18:16 AM] (elek) HDDS-2016. Add option to enforce GDPR in 
Bucket Create command
[Sep 19, 2019 9:59:43 AM] (elek) HDDS-2119. Use checkstyle.xml and 
suppressions.xml in hdds/ozone
[Sep 19, 2019 10:26:53 AM] (elek) HDDS-2148. Remove redundant code in 
CreateBucketHandler.java
[Sep 19, 2019 12:11:44 PM] (elek) HDDS-2141. Missing total number of operations
[Sep 19, 2019 1:23:35 PM] (kihwal) HADOOP-16582. LocalFileSystem's mkdirs() 
does not work as expected under
[Sep 19, 2019 3:00:05 PM] (stevel) HADOOP-16556. Fix some alerts raised by LGTM.
[Sep 19, 2019 4:41:55 PM] (aengineer) HDDS-2110. Arbitrary file can be 
downloaded with the help of
[Sep 19, 2019 4:50:21 PM] (aengineer) HDDS-2127. Detailed Tools doc not 
reachable
[Sep 19, 2019 5:58:33 PM] (bharat) HDDS-2151. Ozone client logs the entire 
request payload at DEBUG level
[Sep 19, 2019 6:00:10 PM] (inigoiri) HDFS-14609. RBF: Security should use 
common AuthenticationFilter.
[Sep 19, 2019 6:06:02 PM] (bharat) HDDS-1054. List Multipart uploads in a 
bucket (#1277)
[Sep 19, 2019 6:30:33 PM] (bharat) HDDS-2154. Fix Checkstyle issues (#1475)
[Sep 19, 2019 11:28:29 PM] (aengineer) HDDS-2101. Ozone filesystem provider 
doesn't exist (#1473)




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core

   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

Failed junit tests :

   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.mapreduce.v2.hs.TestJobHistoryParsing 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.fs.adl.live.TestAdlSdkConfiguration 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1265/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

[jira] [Created] (HDFS-14862) Review of MovedBlocks

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14862:
-

 Summary: Review of MovedBlocks
 Key: HDFS-14862
 URL: https://issues.apache.org/jira/browse/HDFS-14862
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor
 Attachments: HDFS-14862.1.patch

The internal data structure needs to be protected (synchronized), but it is 
scoped as {{protected}}, so any subclass could modify it without holding a 
lock.  Synchronize on the collection itself for protection.  {{getLocations}} 
also returns the internal data structure, so the structure can be modified 
outside of the lock.  Create a copy instead.

{code:java}
/** The locations of the replicas of the block. */
protected final List<L> locations = new ArrayList<L>(3);

public Locations(Block block) {
  this.block = block;
}

/** clean block locations */
public synchronized void clearLocations() {
  locations.clear();
}
...
/** @return its locations */
public synchronized List<L> getLocations() {
  return locations;
}
{code}
 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/MovedBlocks.java#L43]
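
A sketch of the copy-on-read fix described above, assuming 
{{java.util.ArrayList}} is imported; this is not the committed patch:

{code:java}
/** @return a defensive copy, so callers cannot mutate the internal
 *  list outside of this object's lock (sketch of the fix). */
public synchronized List<L> getLocations() {
  return new ArrayList<L>(locations);
}
{code}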

Also, remove a bunch of superfluous and complicated code.






[jira] [Created] (HDFS-14863) Remove Synchronization From BlockPlacementPolicyDefault

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14863:
-

 Summary: Remove Synchronization From BlockPlacementPolicyDefault
 Key: HDFS-14863
 URL: https://issues.apache.org/jira/browse/HDFS-14863
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: block placement
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor


https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1010

The {{clusterMap}} has its own internal synchronization.  Also, these are only 
read operations, so any changes applied to the {{clusterMap}} from another 
thread will still happen regardless: no other thread synchronizes on the 
{{clusterMap}} itself (that I could find), so the extra synchronization buys 
nothing.
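
An illustrative before/after of the direction described, assuming the usual 
{{clusterMap.chooseRandom(scope)}} call pattern (a fragment, not the actual 
patch):

{code:java}
// Before: the synchronized block adds no safety, because NetworkTopology
// already guards its state with an internal read/write lock.
synchronized (clusterMap) {
  chosenNode = clusterMap.chooseRandom(scope);
}

// After (sketch): rely on clusterMap's own internal synchronization.
chosenNode = clusterMap.chooseRandom(scope);
{code}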






[jira] [Created] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14864:
-

 Summary: DatanodeDescriptor Use Concurrent BlockingQueue
 Key: HDFS-14864
 URL: https://issues.apache.org/jira/browse/HDFS-14864
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor


https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L104-L106

This collection needs to be thread safe, and the code repeatedly polls the 
queue to drain it, so use {{BlockingQueue}}, which has a {{drainTo()}} method 
for exactly this purpose:

{quote}
This operation may be more efficient than repeatedly polling this queue.
{quote}
[https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html#drainTo(java.util.Collection,%20int)]

Also, the collection currently returns 'null' if there is nothing to drain from 
the queue.  This is confusing, error-prone behavior; it should just return an 
empty list.  I've also updated the code to consistently return a Java {{List}} 
everywhere, instead of a {{List}} in some places and a native array in others.  
This makes the overall usage much more consistent and safe.
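
A self-contained sketch of the {{drainTo()}} approach; the class and method 
names are illustrative, not DatanodeDescriptor's:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch only; names are illustrative. */
class PendingQueueSketch<T> {
  private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

  void offer(T item) {
    queue.offer(item);
  }

  /** Drains up to {@code max} items in one call; returns an empty
   *  list, never null, when there is nothing queued. */
  List<T> poll(int max) {
    List<T> batch = new ArrayList<>(max);
    queue.drainTo(batch, max); // more efficient than repeated poll()
    return batch;
  }
}
{code}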






[INFO] New branch created: HDDS-1880-Decom

2019-09-20 Thread Anu Engineer
Hi All,

Just FYI. I have created a new branch to pursue the decommission work for
Ozone data nodes. The branch is called "HDDS-1880-Decom" and the work is
tracked in
https://issues.apache.org/jira/browse/HDDS-1880


Thanks
Anu


[jira] [Created] (HDDS-2157) checkstyle: print filenames relative to project root

2019-09-20 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2157:
---

 Summary: checkstyle: print filenames relative to project root
 Key: HDDS-2157
 URL: https://issues.apache.org/jira/browse/HDDS-2157
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: build
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Currently {{checkstyle.sh}} prints files with violations using the full path, e.g.:

{noformat:title=https://github.com/elek/ozone-ci/blob/master/trunk/trunk-nightly-20190920-4x9x8/checkstyle/summary.txt}
...
/workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 23: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
 24: Unused import - 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.
/workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
 23: Unused import - 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.
/workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartKeyInfo.java
 19: Unused import - org.apache.hadoop.hdds.client.ReplicationFactor.
 20: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
 26: Unused import - java.time.Instant.
...
{noformat}

{{/workdir}} is specific to the CI environment.  Similarly, the local checkout 
directory is specific to each developer.

Printing only the path relative to the project root ({{/workdir}} here) would 
make handling these paths easier (e.g. reporting errors in JIRA or opening 
files locally for editing).
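
Illustrative only, since {{checkstyle.sh}} is a shell script, but the 
underlying transformation is a plain relativization against the project root:

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativizeDemo {
  public static void main(String[] args) {
    Path root = Paths.get("/workdir");
    Path full = Paths.get(
        "/workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartKeyInfo.java");
    // Prints the path relative to the project root:
    // hadoop-ozone/common/src/main/java/.../OmMultipartKeyInfo.java
    System.out.println(root.relativize(full));
  }
}
{code}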






[jira] [Resolved] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-20 Thread Anu Engineer (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer resolved HDDS-1982.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~sodonnell] Thank you for the contribution. I have committed this patch to the 
HDDS-1880-Decom branch.

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node decommissioning or entering 
> maintenance can also be HEALTHY, STALE or go DEAD.
> Therefore in this Jira I propose we should model a node's state with two 
> different sets of values. The first is effectively the liveness of the 
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node's operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall total number of states for a node is the cross-product 
> of the two lists above; however, it probably makes sense to keep the two 
> states separate internally.
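
A sketch of the two-dimensional model described above; the enum and class names 
follow the description but are not necessarily the committed ones:

{code:java}
enum NodeLiveness { HEALTHY, STALE, DEAD }

enum NodeOperationalState {
  IN_SERVICE,
  DECOMMISSIONING,
  DECOMMISSIONED,
  ENTERING_MAINTENANCE,
  IN_MAINTENANCE
}

/** A node's overall status pairs the two dimensions, kept separate internally. */
final class NodeStatus {
  private final NodeLiveness liveness;
  private final NodeOperationalState operationalState;

  NodeStatus(NodeLiveness liveness, NodeOperationalState operationalState) {
    this.liveness = liveness;
    this.operationalState = operationalState;
  }

  NodeLiveness getLiveness() { return liveness; }
  NodeOperationalState getOperationalState() { return operationalState; }
}
{code}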






[jira] [Created] (HDFS-14865) Reduce Synchronization in DatanodeManager

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14865:
-

 Summary: Reduce Synchronization in DatanodeManager
 Key: HDFS-14865
 URL: https://issues.apache.org/jira/browse/HDFS-14865
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor









[jira] [Created] (HDDS-2158) Fix Json Injection in JsonUtils

2019-09-20 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2158:


 Summary: Fix Json Injection in JsonUtils
 Key: HDDS-2158
 URL: https://issues.apache.org/jira/browse/HDDS-2158
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the JSON 
string before serializing it, which could result in JSON injection.
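
One possible validation approach (an assumption, not necessarily the committed 
fix) is to round-trip the input through Jackson so that only well-formed JSON 
is ever emitted:

{code:java}
import java.io.IOException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

/** Sketch only: parse first, then pretty-print the parsed tree. */
public class JsonValidateSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static String toValidatedPrettyJson(String json) throws IOException {
    JsonNode tree = MAPPER.readTree(json); // throws on malformed input
    return MAPPER.writerWithDefaultPrettyPrinter().writeValueAsString(tree);
  }
}
{code}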






[jira] [Created] (HDFS-14866) NameNode stopRequested is Marked volatile

2019-09-20 Thread David Mollitor (Jira)
David Mollitor created HDFS-14866:
-

 Summary: NameNode stopRequested is Marked volatile
 Key: HDFS-14866
 URL: https://issues.apache.org/jira/browse/HDFS-14866
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor


https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java#L405

"Used for testing" so not a big deal, but it's a bit odd that it's scoped as 
'protected' and is not 'volatile'.  It could be accessed outside of a lock and 
getting a bad value.  Tighten that up a little.
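
A sketch of the tightened field (the actual field lives in NameNode.java):

{code:java}
// 'volatile' makes a stop requested by one thread immediately visible
// to readers on other threads, even outside any lock.
private volatile boolean stopRequested = false;
{code}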






[jira] [Created] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-20 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2159:


 Summary: Fix Race condition in ProfileServlet#pid
 Key: HDDS-2159
 URL: https://issues.apache.org/jira/browse/HDDS-2159
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


There is a race condition in ProfileServlet.  The servlet member field {{pid}} 
should not be used for local, per-request assignment; because the field is 
shared across concurrent requests, they can overwrite each other's value.
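
A sketch of the fix direction, with illustrative names: each request keeps its 
pid in a method-local variable instead of the shared servlet field.

{code:java}
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Sketch only; not the actual ProfileServlet code. */
public class ProfileServletSketch extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Method-local: never shared between overlapping requests.
    String requestPid = req.getParameter("pid");
    resp.getWriter().println("profiling pid " + requestPid);
  }
}
{code}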






[jira] [Resolved] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-20 Thread Anu Engineer (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer resolved HDDS-2128.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.






[jira] [Created] (HDDS-2160) Add acceptance test for ozonesecure-mr compose

2019-09-20 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2160:


 Summary: Add acceptance test for ozonesecure-mr compose
 Key: HDDS-2160
 URL: https://issues.apache.org/jira/browse/HDDS-2160
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This will give us coverage for running basic MR jobs against YARN on a 
security-enabled Ozone cluster.






[jira] [Created] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-20 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2161:
---

 Summary: Create RepeatedKeyInfo structure to be saved in 
deletedTable
 Key: HDDS-2161
 URL: https://issues.apache.org/jira/browse/HDDS-2161
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Currently, the OM metadata deletedTable stores one entry per keyname, holding 
the deleted key's KeyInfo.

When a user deletes a key, its KeyInfo is moved to the deletedTable.

If a user creates and deletes a key with the exact same name in quick 
succession repeatedly, the old KeyInfo can get overwritten and we may be left 
with dangling blocks.

To address this, we currently append the delete timestamp to the keyname, 
preserving the multiple delete attempts for the same key name.

However, for GDPR compliance we need a way to check whether a key has been 
deleted from the deletedTable, and given the above, we may not get accurate 
information, and it may also confuse users.

This Jira aims to:
 # Create a new structure, RepeatedKeyInfo, which allows us to group multiple 
KeyInfo entries so they can be saved to the deletedTable against a single 
keyname.
 # Due to this, before we move a key to the deletedTable, we need to check 
whether a key with the same name already exists there. If it does, fetch the 
existing instance, add the latest key to the list, and store it back to the 
deletedTable (see the sketch below).
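
A sketch of the read-modify-write in step 2; the names are illustrative, not 
the committed API:

{code:java}
// Fragment (illustrative names): group all deleted versions under one keyname.
RepeatedKeyInfo repeated = deletedTable.get(keyName);
if (repeated == null) {
  repeated = new RepeatedKeyInfo();   // first deletion of this keyname
}
repeated.addKeyInfo(keyInfo);         // append the newly deleted version
deletedTable.put(keyName, repeated);  // single entry groups all versions
{code}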






[jira] [Created] (HDDS-2162) Make KeyTab configuration support HA config

2019-09-20 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2162:


 Summary: Make KeyTab configuration support HA config
 Key: HDDS-2162
 URL: https://issues.apache.org/jira/browse/HDDS-2162
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


To have a single configuration usable across the OM cluster, a few of the 
configs, such as:

OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
OZONE_OM_KERBEROS_PRINCIPAL_KEY,
OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY

need to support keys suffixed with the service id and node id.

This Jira is to fix the above configs.
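
One possible resolution scheme, sketched with illustrative names (the suffix 
format is an assumption, not the committed behavior):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch only: try the most specific key first, then fall back. */
final class OmConfSketch {
  static String get(Configuration conf, String key,
                    String serviceId, String nodeId) {
    // Most specific first, e.g. ozone.om.kerberos.keytab.file.<svc>.<node>
    String suffixed = key + "." + serviceId + "." + nodeId;
    String value = conf.get(suffixed);
    return value != null ? value : conf.get(key); // plain key as fallback
  }
}
{code}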

 






[jira] [Created] (HDDS-2163) Add "Replication factor" to the output of list keys

2019-09-20 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2163:


 Summary: Add "Replication factor" to the output of list keys 
 Key: HDDS-2163
 URL: https://issues.apache.org/jira/browse/HDDS-2163
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone CLI
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


The output of "ozone sh key list /vol1/bucket1" does not include replication 
factor and it will be good to have it in the output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-20 Thread Duo Zhang
I think this one is already in place, so we have to upgrade...

https://issues.apache.org/jira/browse/HADOOP-16557

Wangda Tan wrote on Saturday, Sep 21, 2019 at 7:19 AM:

> Hi Vinay,
>
> A bit confused: I see that HADOOP-13363 is still pending. Do we need to
> upgrade the protobuf version to 3.7.1 NOW, or once HADOOP-13363 is completed?
>
> Thanks,
> Wangda
>
> On Fri, Sep 20, 2019 at 8:11 AM Vinayakumar B 
> wrote:
>
> > Hi All,
> >
> > A very long-pending task, the protobuf upgrade, is happening in
> > HADOOP-13363. As part of that, the protobuf version has been upgraded
> > to 3.7.1.
> >
> > Please update your build environments to have protobuf version 3.7.1.
> >
> > BUILDING.txt has been updated with the latest instructions.
> >
> > This prerequisite to update the protoc dependency manually is required
> > until 'hadoop-maven-plugin' is replaced with 'protobuf-maven-plugin' to
> > dynamically resolve the required protoc exe.
> >
> > The Dockerfile is being updated to have 3.7.1 as the default protoc
> > for test environments.
> >
> > Thanks,
> > -Vinay
> >
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/451/

[Sep 20, 2019 4:59:14 PM] (ekrogen) HADOOP-16581. Revise ValueQueue to 
correctly replenish queues that go


Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-20 Thread Vinayakumar B
@Wangda Tan,
Sorry for the confusion. HADOOP-13363 is an umbrella jira to track the
multiple stages of the protobuf upgrade in subtasks (jar upgrade, Docker
update, plugin upgrade, shading, etc.).
Right now, the first task, the jar upgrade, is done, so the protoc
executable needs to be updated in the build environments.

@张铎(Duo Zhang),
Sorry for the inconvenience. Yes, indeed, a plugin update before the jar
upgrade was possible. Sorry I missed it.

The plugin update needs to be done for the whole project, for which
precommit Jenkins will need more time to complete end-to-end runs.
So the plugin update is planned in stages in further subtasks. It could be
done in 2-3 days.

-Vinay

On Sat, 21 Sep 2019, 5:55 am 张铎(Duo Zhang),  wrote:

> I think this one is already in place, so we have to upgrade...
>
> https://issues.apache.org/jira/browse/HADOOP-16557
>
> Wangda Tan wrote on Saturday, Sep 21, 2019 at 7:19 AM:
>
> > Hi Vinay,
> >
> > A bit confused: I see that HADOOP-13363 is still pending. Do we need to
> > upgrade the protobuf version to 3.7.1 NOW, or once HADOOP-13363 is completed?
> >
> > Thanks,
> > Wangda
> >
> > On Fri, Sep 20, 2019 at 8:11 AM Vinayakumar B 
> > wrote:
> >
> > > Hi All,
> > >
> > > A very long-pending task, the protobuf upgrade, is happening in
> > > HADOOP-13363. As part of that, the protobuf version has been upgraded
> > > to 3.7.1.
> > >
> > > Please update your build environments to have protobuf version 3.7.1.
> > >
> > > BUILDING.txt has been updated with the latest instructions.
> > >
> > > This prerequisite to update the protoc dependency manually is required
> > > until 'hadoop-maven-plugin' is replaced with 'protobuf-maven-plugin' to
> > > dynamically resolve the required protoc exe.
> > >
> > > The Dockerfile is being updated to have 3.7.1 as the default protoc
> > > for test environments.
> > >
> > > Thanks,
> > > -Vinay
> > >
> >
>


[jira] [Resolved] (HDDS-2163) Add "Replication factor" to the output of list keys

2019-09-20 Thread Bharat Viswanadham (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2163.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add "Replication factor" to the output of list keys 
> 
>
> Key: HDDS-2163
> URL: https://issues.apache.org/jira/browse/HDDS-2163
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone CLI
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The output of "ozone sh key list /vol1/bucket1" does not include the 
> replication factor; it would be good to have it in the output.


