[jira] [Created] (HDDS-1164) Add New blockade Tests to test Replica Manager

2019-02-22 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-1164:


 Summary: Add New blockade Tests to test Replica Manager
 Key: HDDS-1164
 URL: https://issues.apache.org/jira/browse/HDDS-1164
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi









Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-02-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/

[Feb 21, 2019 10:35:19 PM] (eyang) Revert "HADOOP-13707. Skip authorization for 
anonymous user to access
[Feb 21, 2019 10:36:02 PM] (eyang) Revert "HADOOP-13707. Skip authorization for 
anonymous user to access
[Feb 21, 2019 10:36:59 PM] (eyang) Revert "HADOOP-13707. Skip authorization for 
anonymous user to access




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-common-project/hadoop-common
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient non-serializable instance field map In GlobalStorageStatistics.java:instance field map In GlobalStorageStatistics.java

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs
   Dead store to state in org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream, INodeSymlink) At FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream, INodeSymlink) At FSImageFormatPBINode.java:[line 623]

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.tools.TestDistCpSystem 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt  [328K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt  [308K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-checkstyle-root.txt  [16M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/pathlen.txt  [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-patch-shellcheck.txt  [72K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/diff-patch-shelldocs.txt  [8.0K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/whitespace-eol.txt  [12M]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/whitespace-tabs.txt  [1.2M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/xml.txt  [8.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/240/artifact/

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-02-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/

[Feb 21, 2019 7:59:01 AM] (bibinchundatt) YARN-8132. Final Status of 
applications shown as UNDEFINED in ATS app
[Feb 21, 2019 9:21:21 AM] (wwei) YARN-9315. TestCapacitySchedulerMetrics fails 
intermittently.
[Feb 21, 2019 1:04:18 PM] (elek) HDDS-1129. Fix findbug/checkstyle errors hdds 
projects. Contributed by
[Feb 21, 2019 2:08:46 PM] (stevel) HADOOP-16105. WASB in secure mode does not 
set connectingUsingSAS.
[Feb 21, 2019 3:06:34 PM] (surendralilhore) HDFS-14216. NullPointerException 
happens in NamenodeWebHdfs. Contributed
[Feb 21, 2019 3:36:15 PM] (aw) HADOOP-16035. Jenkinsfile for Hadoop
[Feb 21, 2019 4:07:08 PM] (nanda) HDDS-1126. Datanode is trying to quasi-close a container which is
[Feb 21, 2019 4:18:07 PM] (wwei) YARN-9258. Support to specify allocation tags 
without constraint in
[Feb 21, 2019 7:17:32 PM] (wangda) YARN-9319. Fix compilation issue of handling 
typedef an existing name by
[Feb 21, 2019 9:29:10 PM] (github) HDDS-1141. Update DBCheckpointSnapshot to 
DBCheckpoint.
[Feb 21, 2019 9:57:05 PM] (bharat) HDDS-1161. Disable failing test which are 
tracked by a separated jira.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.util.TestBasicDiskValidator 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.mapred.TestJobCounters 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-compile-javac-root.txt  [336K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-checkstyle-root.txt  [17M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-patch-hadolint.txt  [8.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/pathlen.txt  [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-patch-pylint.txt  [144K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/whitespace-eol.txt  [9.6M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/whitespace-tabs.txt  [1.1M]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1055/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [8.0K]
   https://builds.ap

[jira] [Created] (HDFS-14312) Scale test KMS using kms audit log

2019-02-22 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14312:
--

 Summary: Scale test KMS using kms audit log
 Key: HDFS-14312
 URL: https://issues.apache.org/jira/browse/HDFS-14312
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: kms
Affects Versions: 3.3.0
Reporter: Wei-Chiu Chuang


It appears to me that Dynamometer's architecture allows KMS scale tests too.

I imagine there are two ways to scale test a KMS.
# Take KMS audit logs, and replay the logs against a KMS.
# Configure Dynamometer to start a KMS in addition to the NameNode. Assuming the fsimage comes from an encrypted cluster, replaying the HDFS audit log also exercises the KMS.

It would be even more interesting to have a tool that converts an unencrypted cluster fsimage to an encrypted one.
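
A replay driver for the first approach could look roughly like the sketch below. The aggregated audit-log line format ("OK[op=..., key=..., ...]"), the provider URI argument, and the decision to replay only GENERATE_EEK ops are illustrative assumptions, not part of this proposal:
{code:java}
// Hypothetical replay driver: parses an aggregated KMS audit log
// (e.g. "OK[op=GENERATE_EEK, key=k1, user=u, accessCount=4, interval=0ms]")
// and re-issues the operations against a target KMS.
import java.io.BufferedReader;
import java.io.FileReader;
import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsAuditReplay {
  private static final Pattern ENTRY =
      Pattern.compile("OK\\[op=(\\w+), key=([^,\\]]+)");

  // args[0]: KMS provider URI, e.g. kms://http@kms-host:9600/kms (placeholder)
  // args[1]: path to a KMS audit log file
  public static void main(String[] args) throws Exception {
    KeyProvider provider =
        KeyProviderFactory.get(new URI(args[0]), new Configuration());
    KeyProviderCryptoExtension kms =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);

    try (BufferedReader in = new BufferedReader(new FileReader(args[1]))) {
      String line;
      while ((line = in.readLine()) != null) {
        Matcher m = ENTRY.matcher(line);
        if (!m.find()) {
          continue;
        }
        // Only GENERATE_EEK can be replayed verbatim; ops like DECRYPT_EEK
        // would need the original encrypted key material, so this sketch
        // skips them.
        if ("GENERATE_EEK".equals(m.group(1))) {
          kms.generateEncryptedKey(m.group(2));
        }
      }
    }
  }
}
{code}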






[jira] [Created] (HDDS-1165) Document generation in maven should be configured on execution level

2019-02-22 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1165:
--

 Summary: Document generation in maven should be configured on execution level
 Key: HDDS-1165
 URL: https://issues.apache.org/jira/browse/HDDS-1165
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


Documentation for the Ozone/HDDS project is generated from Maven with the help of the exec-maven-plugin.

There are multiple ways to configure plugins in Maven. A plugin can be configured at the plugin level:
{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <goals>
        <goal>exec</goal>
      </goals>
      <phase>compile</phase>
    </execution>
  </executions>
  <configuration>
    ...
  </configuration>
</plugin>
{code}
In this case not only the specific execution but all executions are configured (even one triggered directly by mvn exec:exec).

Or it can be configured at the execution level:
{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <goals>
        <goal>exec</goal>
      </goals>
      <phase>compile</phase>
      <configuration>
        ...
      </configuration>
    </execution>
  </executions>
</plugin>
{code}
In this case the configuration applies only to this specific execution, which is bound to a specific phase (compile in this case).

Unfortunately hadoop-hdds/docs/pom.xml uses the first approach; it should be replaced with the second by moving the configuration inside the execution.

Without this change Yetus can't detect the dependency order.

How to test:

The easiest way to reproduce the problem is to execute:
{code:java}
mvn  -fae exec:exec -Dexec.executable=pwd -Dexec.args='' -Phdds{code}
 






[jira] [Created] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap instead of df/du

2019-02-22 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14313:
--

 Summary: Get hdfs used space from FsDatasetImpl#volumeMap instead 
of df/du
 Key: HDFS-14313
 URL: https://issues.apache.org/jira/browse/HDFS-14313
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, performance
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Lisheng Sun


The two existing ways of getting used space, DU and DF, are both insufficient:
 #  Running DU across lots of disks is very expensive, and running all of the processes at the same time creates a noticeable IO spike.
 #  Running DF is inaccurate when the disk is shared by multiple datanodes or other servers.

 Getting the HDFS used space from FsDatasetImpl#volumeMap instead has very low overhead and is accurate.
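
As a rough sketch of the idea (hypothetical types and names; the real FsDatasetImpl#volumeMap is a ReplicaMap with a different API), used space becomes a pure in-memory traversal:
{code:java}
// Hypothetical stand-in for computing used space from the in-memory
// replica map instead of shelling out to du/df. No disk I/O involved.
import java.util.Collection;
import java.util.Map;

public final class UsedSpaceFromReplicaMap {
  /** Minimal view of a replica; the real class is ReplicaInfo. */
  public interface Replica {
    long getNumBytes(); // bytes the replica occupies on disk
  }

  /**
   * @param volumeMap block pool id -> replicas, standing in for
   *                  FsDatasetImpl#volumeMap
   * @return total bytes used by all replicas; O(#replicas), no IO spike
   */
  public static long getUsedSpace(Map<String, Collection<Replica>> volumeMap) {
    long used = 0;
    for (Collection<Replica> replicas : volumeMap.values()) {
      for (Replica r : replicas) {
        used += r.getNumBytes();
      }
    }
    return used;
  }
}
{code}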






[jira] [Created] (HDFS-14314) fullBlockReportLeaseId should be reset after

2019-02-22 Thread star (JIRA)
star created HDFS-14314:
---

 Summary: fullBlockReportLeaseId should be reset after 
 Key: HDFS-14314
 URL: https://issues.apache.org/jira/browse/HDFS-14314
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.8.0
 Environment: 
Reporter: star


  Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN for a full block report lease id before sending a full block report, and then sends the report together with that lease id. If the lease id is invalid, the NN rejects the full block report and logs "not in the pending set".

  Consider the case where a DN is doing full block reporting while the NN is restarted. The DN may later send a full block report with a lease id acquired from the previous NN instance, which is invalid to the new NN instance. Though the DN recognizes the new NN instance by heartbeat and reregisters itself, it does not reset the lease id from the previous instance.

  The issue may cause DNs to temporarily go dead, making it unsafe to restart the NN, especially in a hadoop cluster with a large number of DNs.

  To make it clear, look at the code below, taken from the offerService method of class BPServiceActor (some code is elided to focus on the current issue). fullBlockReportLeaseId is a local variable. When the NN restarts, an exception occurs at the blockReport call and is caught by the catch block in the while loop, so fullBlockReportLeaseId is never reset to 0. After the NN has restarted, the DN sends a full block report that the new NN instance rejects, and the DN will not send another full block report until the next scheduled one, about an hour later.

  The solution is simple: reset fullBlockReportLeaseId to 0 after any exception, or after registering with the NN. The DN will then ask the new NN instance for a valid fullBlockReportLeaseId.
{code:java}
private void offerService() throws Exception {

  long fullBlockReportLeaseId = 0;

  //
  // Now loop for a long time
  //
  while (shouldRun()) {
try {
  final long startTime = scheduler.monotonicNow();

  //
  // Every so often, send heartbeat or block-report
  //
  final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
  HeartbeatResponse resp = null;
  if (sendHeartbeat) {
  
boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
scheduler.isBlockReportDue(startTime);
scheduler.scheduleNextHeartbeat();
if (!dn.areHeartbeatsDisabledForTests()) {
  resp = sendHeartBeat(requestBlockReportLease);
  assert resp != null;
  if (resp.getFullBlockReportLeaseId() != 0) {
if (fullBlockReportLeaseId != 0) {
  LOG.warn(nnAddr + " sent back a full block report lease " +
  "ID of 0x" +
  Long.toHexString(resp.getFullBlockReportLeaseId()) +
  ", but we already have a lease ID of 0x" +
  Long.toHexString(fullBlockReportLeaseId) + ". " +
  "Overwriting old lease ID.");
}
fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
  }
 
}
  }
   
 
  if ((fullBlockReportLeaseId != 0) || forceFullBr) {
//Exception occurred here when NN restarting
cmds = blockReport(fullBlockReportLeaseId);
fullBlockReportLeaseId = 0;
  }
  
} catch(RemoteException re) {
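      // Proposed fix (as described above, shown here as a suggestion):
      // reset the stale lease id so the next heartbeat asks the new NN
      // instance for a fresh lease (requestBlockReportLease is derived
      // from fullBlockReportLeaseId == 0):
      //   fullBlockReportLeaseId = 0;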
  
    }
  } // while (shouldRun())
} // offerService{code}
 


