[jira] [Created] (HDDS-352) Separate install and testing phases in acceptance tests.

2018-08-16 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-352:
-

 Summary: Separate install and testing phases in acceptance tests.
 Key: HDDS-352
 URL: https://issues.apache.org/jira/browse/HDDS-352
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


In the current acceptance tests (hadoop-ozone/acceptance-test) the robot files 
contain two kinds of commands:

1) starting and stopping clusters
2) testing the basic behaviour with client calls

It would be great to separate these two functionalities and include only the 
testing part in the robot files.

1. Ideally the tests could be executed in any environment. After a Kubernetes 
install I would like to run a smoke test. It may be a different environment, 
but I would like to execute most of the tests (check the Ozone CLI, REST API, etc.).

2. There could be multiple Ozone environments (standalone Ozone cluster, HDFS + 
Ozone cluster, etc.). We need to run all the tests against all of them.

3. With this approach we can collect the docker-compose files in just one place 
(the hadoop-dist project). After a docker-compose up there should be a way to 
execute the tests against an existing cluster. Something like this:

{code}
docker run -it -v $(pwd)/acceptance-test:/opt/acceptance-test \
  -e SCM_URL=http://scm:9876 --network=composenetwork \
  apache/hadoop-runner start-all-tests.sh
{code}

4. It also means that we need to execute the tests from a separate container 
instance. We need a configuration parameter to define the cluster topology. 
Ideally it could be just one environment variable with the URL of the SCM; the 
SCM could then be used to discover all of the required components and to 
download the configuration files from there.

5. Until now we have used the log output of the docker-compose clusters for 
readiness probes. These should be converted to poll the JMX endpoints and check 
whether the cluster is up and running. If we need the log files for additional 
testing, we can create multiple implementations for different types of 
environments (docker-compose/Kubernetes) and include the right set of functions 
based on an external parameter.
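A JMX-based readiness probe could look roughly like the sketch below. The /jmx 
path is the standard Hadoop metrics servlet, but the class, method names, and 
polling intervals here are assumptions made up only for illustration:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/** Sketch of a JMX-based readiness probe; endpoint layout is an assumption. */
class JmxProbe {

    /** Builds the JMX servlet URL from a component base URL, e.g. http://scm:9876. */
    static String jmxUrl(String baseUrl) {
        return baseUrl.replaceAll("/+$", "") + "/jmx";
    }

    /** Polls the /jmx endpoint until it answers HTTP 200 or the timeout expires. */
    static boolean waitUntilReady(String baseUrl, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(jmxUrl(baseUrl)).openConnection();
                conn.setConnectTimeout(2000);
                if (conn.getResponseCode() == 200) {
                    return true; // component is up and serving metrics
                }
            } catch (IOException ignored) {
                // not up yet, retry
            }
            Thread.sleep(1000);
        }
        return false;
    }
}
```

The same probe works for docker-compose and Kubernetes environments, since it 
only depends on the component URL, not on access to the container logs.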

6. We still need a generic script under the ozone acceptance-test project to 
run all the tests (start the docker-compose cluster, execute the tests in a 
separate container, stop the cluster).




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-16 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-353:


 Summary: Multiple delete Blocks tests are failing consistently
 Key: HDDS-353
 URL: https://issues.apache.org/jira/browse/HDDS-353
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager, SCM
Reporter: Shashikant Banerjee
 Fix For: 0.2.1


As per the test report here: 
[https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], the 
following tests are failing:

1. TestStorageContainerManager#testBlockDeletionTransactions
2. TestStorageContainerManager#testBlockDeletingThrottling
3. TestBlockDeletion#testBlockDeletion






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/

[Aug 15, 2018 4:15:54 AM] (wwei) YARN-8614. Fix few annotation typos in 
YarnConfiguration. Contributed by
[Aug 15, 2018 3:11:17 PM] (xyao) HDDS-347. 
TestCloseContainerByPipeline#testCloseContainerViaStandaAlone
[Aug 15, 2018 3:31:59 PM] (aajisaka) HADOOP-15552. Move logging APIs over to 
slf4j in hadoop-tools - Part2.
[Aug 15, 2018 4:23:05 PM] (xiao) HDFS-13217. Audit log all EC policy names 
during
[Aug 15, 2018 5:06:17 PM] (aengineer) Revert "HDDS-119:Skip Apache license 
header check for some ozone doc
[Aug 15, 2018 5:58:29 PM] (aengineer) HADOOP-15552. Addendum patch to fix the 
build break in Ozone File
[Aug 15, 2018 8:53:47 PM] (xiao) HDFS-13732. ECAdmin should print the policy 
name when an EC policy is




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field: FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/870/artifact/out/branch-fin

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-16 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/560/

[Aug 15, 2018 3:11:17 PM] (xyao) HDDS-347. 
TestCloseContainerByPipeline#testCloseContainerViaStandaAlone
[Aug 15, 2018 3:31:59 PM] (aajisaka) HADOOP-15552. Move logging APIs over to 
slf4j in hadoop-tools - Part2.
[Aug 15, 2018 4:23:05 PM] (xiao) HDFS-13217. Audit log all EC policy names 
during
[Aug 15, 2018 5:06:17 PM] (aengineer) Revert "HDDS-119:Skip Apache license 
header check for some ozone doc
[Aug 15, 2018 5:58:29 PM] (aengineer) HADOOP-15552. Addendum patch to fix the 
build break in Ozone File
[Aug 15, 2018 8:53:47 PM] (xiao) HDFS-13732. ECAdmin should print the policy 
name when an EC policy is
[Aug 16, 2018 10:44:18 AM] (yqlin) HDFS-13829. Remove redundant condition 
judgement in
[Aug 16, 2018 3:06:17 PM] (jlowe) YARN-8656. container-executor should not 
write cgroup tasks files for


ERROR: File 'out/email-report.txt' does not exist


[jira] [Created] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-08-16 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-354:
---

 Summary: VolumeInfo.getScmUsed throws NPE
 Key: HDDS-354
 URL: https://issues.apache.org/jira/browse/HDDS-354
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


java.lang.NullPointerException
	at org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
	at org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
	at org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
	at org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
	at org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
	at org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
	at org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)







[jira] [Created] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-355:
---

 Summary: Disable OpenKeyDeleteService and DeleteKeysService.
 Key: HDDS-355
 URL: https://issues.apache.org/jira/browse/HDDS-355
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1


We have identified performance issues with these two background services and 
will improve them with several follow-up JIRAs after this one. 






[jira] [Created] (HDDS-356) Support ColumnFamily based RocksDBStore and TableStore

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-356:
---

 Summary: Support ColumnFamily based RocksDBStore and TableStore
 Key: HDDS-356
 URL: https://issues.apache.org/jira/browse/HDDS-356
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer


This is to minimize the performance impact of the expensive RocksDB table scans 
from the background services disabled by HDDS-355.
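As a rough illustration of the intended split (an in-memory stand-in, not the 
real RocksDB API), each named table below plays the role of a RocksDB column 
family, so a background service can scan one table without paying for a scan 
over the whole store. The DBStore/Table names are only inferred from the issue 
title:

```java
import java.util.HashMap;
import java.util.Map;

/** In-memory sketch of a column-family-style store: one DBStore, many
 *  independently scannable named tables. Names are assumptions. */
class DBStore {
    private final Map<String, Map<String, byte[]>> tables = new HashMap<>();

    /** Opens (or creates) a named table, analogous to a ColumnFamilyHandle. */
    Table getTable(String name) {
        return new Table(tables.computeIfAbsent(name, k -> new HashMap<>()));
    }

    static class Table {
        private final Map<String, byte[]> data;

        Table(Map<String, byte[]> data) {
            this.data = data;
        }

        void put(String key, byte[] value) {
            data.put(key, value);
        }

        byte[] get(String key) {
            return data.get(key); // null when the key is absent
        }
    }
}
```

With real RocksDB the same shape maps onto ColumnFamilyDescriptor / 
ColumnFamilyHandle, keeping, for example, open-key bookkeeping out of the key 
table that foreground requests scan.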






[jira] [Created] (HDDS-357) Use DBStore and TableStore for OzoneManager non-background service

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-357:
---

 Summary: Use DBStore and TableStore for OzoneManager 
non-background service
 Key: HDDS-357
 URL: https://issues.apache.org/jira/browse/HDDS-357
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1









[jira] [Created] (HDDS-358) Use DBStore and TableStore for OzoneManager background services

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-358:
---

 Summary: Use DBStore and TableStore for OzoneManager background 
services
 Key: HDDS-358
 URL: https://issues.apache.org/jira/browse/HDDS-358
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1









[jira] [Created] (HDDS-359) RocksDB Profiles support

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-359:
---

 Summary: RocksDB Profiles support
 Key: HDDS-359
 URL: https://issues.apache.org/jira/browse/HDDS-359
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1


This allows us to tune the OM/SCM DB for different machine configurations.






[jira] [Created] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-360:
---

 Summary: Use RocksDBStore and TableStore for SCM Metadata
 Key: HDDS-360
 URL: https://issues.apache.org/jira/browse/HDDS-360
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1









[jira] [Created] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-361:
---

 Summary: Use DBStore and TableStore for DN metadata
 Key: HDDS-361
 URL: https://issues.apache.org/jira/browse/HDDS-361
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Lokesh Jain
 Fix For: 0.2.1









[jira] [Created] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-08-16 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-362:
---

 Summary: Modify functions impacted by SCM chill mode in 
ScmBlockLocationProtocol
 Key: HDDS-362
 URL: https://issues.apache.org/jira/browse/HDDS-362
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol






[jira] [Created] (HDFS-13830) Backport HDFS-13141 to branch-3.0.3: WebHDFS: Add support for getting snapshottable directory list

2018-08-16 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13830:
-

 Summary: Backport HDFS-13141 to branch-3.0.3: WebHDFS: Add support 
for getting snapshottable directory list
 Key: HDFS-13830
 URL: https://issues.apache.org/jira/browse/HDFS-13830
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 3.0.3
Reporter: Siyao Meng
Assignee: Siyao Meng


HDFS-13141 conflicts with branch-3.0.3 because of an interface change in 
HdfsFileStatus.

This JIRA aims to backport getSnapshottableDirListing() to branch-3.0.3.






[jira] [Created] (HDFS-13831) Make block increment deletion number configurable

2018-08-16 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13831:


 Summary: Make block increment deletion number configurable
 Key: HDFS-13831
 URL: https://issues.apache.org/jira/browse/HDFS-13831
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Yiqun Lin


When the NN deletes a large directory, it holds the write lock for a long time. 
To improve this, we remove the blocks in batches, so that other waiters have a 
chance to get the lock. But right now, the batch size is a hard-coded value.
{code}
  static int BLOCK_DELETION_INCREMENT = 1000;
{code}
We can make this value configurable, so that we can control how often other 
waiters get a chance to acquire the lock. 
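The batching that the increment controls might be sketched like this; the 
configuration key name below is hypothetical, invented only to illustrate the 
proposal:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the batched deletion split; the config key is hypothetical. */
class BlockDeletionBatcher {
    // Hypothetical key; the current hard-coded value becomes its default.
    static final String BLOCK_DELETION_INCREMENT_KEY =
        "dfs.namenode.block.deletion.increment";
    static final int BLOCK_DELETION_INCREMENT_DEFAULT = 1000;

    /** Splits the block list into batches of at most 'increment' entries.
     *  The caller would release and re-acquire the namesystem write lock
     *  between batches, so other waiters get a chance at the lock. */
    static <T> List<List<T>> partition(List<T> blocks, int increment) {
        if (increment <= 0) {
            throw new IllegalArgumentException("increment must be positive");
        }
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < blocks.size(); i += increment) {
            batches.add(new ArrayList<>(
                blocks.subList(i, Math.min(i + increment, blocks.size()))));
        }
        return batches;
    }
}
```

A smaller increment means more lock hand-offs (better fairness, more overhead); 
a larger one means fewer, longer lock-hold periods.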


