[jira] [Created] (HDFS-11285) Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)

2017-01-03 Thread Lantao Jin (JIRA)
Lantao Jin created HDFS-11285:
-

 Summary: Dead DataNodes keep a long time in (Dead, 
DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)
 Key: HDFS-11285
 URL: https://issues.apache.org/jira/browse/HDFS-11285
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Lantao Jin


We have seen the use case of decommissioning DataNodes that are already dead or 
unresponsive and not expected to rejoin the cluster. In a large cluster, we 
found more than 100 nodes that were dead and still decommissioning, even though 
their {{Under replicated blocks}} and {{Blocks with no live replicas}} counts 
were both ZERO. This was actually fixed in 
[HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]; after that fix, 
running refreshNodes twice eliminated the problem. However, it seems the fix 
was lost in the [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411] 
refactoring. We are using a Hadoop version based on 2.7.1, and only the 
following sequence of operations can transition a node from (Dead, 
DECOMMISSION_INPROGRESS) to (Dead, DECOMMISSIONED):
# Remove it from hdfs-exclude
# Run refreshNodes
# Re-add it to hdfs-exclude
# Run refreshNodes

So, why was this code removed in the refactored DecommissionManager?
{code:java}
if (!node.isAlive) {
  LOG.info("Dead node " + node + " is decommissioned immediately.");
  node.setDecommissioned();
}
{code}
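
For illustration, here is a minimal standalone model of the missing dead-node 
shortcut (a sketch only; the class, enum, and method names below are 
illustrative, not the actual DecommissionManager API):

{code:java}
/** Minimal standalone model of the dead-node shortcut; not Hadoop code. */
class DecommissionModel {
  enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

  static final class Node {
    final String name;
    final boolean isAlive;
    AdminState adminState = AdminState.NORMAL;
    Node(String name, boolean isAlive) { this.name = name; this.isAlive = isAlive; }
  }

  /** Start decommissioning a node; a dead node can be finished immediately. */
  static void startDecommission(Node node) {
    if (!node.isAlive) {
      // No replicas can be drained from a dead node, so leaving it in
      // DECOMMISSION_INPROGRESS would never terminate.
      System.out.println("Dead node " + node.name + " is decommissioned immediately.");
      node.adminState = AdminState.DECOMMISSIONED;
    } else {
      node.adminState = AdminState.DECOMMISSION_INPROGRESS;
    }
  }

  public static void main(String[] args) {
    Node dead = new Node("dn1:50010", false);
    startDecommission(dead);
    System.out.println(dead.name + " -> " + dead.adminState);
  }
}
{code}

Without the {{!node.isAlive}} branch, a node that dies before or during 
decommissioning is never moved to DECOMMISSIONED, which matches the behavior 
described above.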



[jira] [Created] (HDFS-11286) GETFILESTATUS, RENAME logic breaking due to incomplete path argument

2017-01-03 Thread Sampada Dehankar (JIRA)
Sampada Dehankar created HDFS-11286:
---

 Summary: GETFILESTATUS, RENAME logic breaking due to incomplete 
path argument
 Key: HDFS-11286
 URL: https://issues.apache.org/jira/browse/HDFS-11286
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 2.7.1
 Environment: Windows
Reporter: Sampada Dehankar


We use ADLS to store customer data, and the HttpFS server/client is used to 
access that data from our containers. HttpFS functions like GETFILESTATUS and 
RENAME expect the absolute 'path' of the file(s) as their argument. But when a 
request from the HttpFS client is received at the server, the server forwards 
only the relative path, rather than the absolute path, to ADLS. This breaks 
the logic of the GETFILESTATUS and RENAME functions.
 
Steps to reproduce the GETFILESTATUS bug: 

Run the following command from the client: 

Example 1: 
hadoop fs -ls adl_scheme://account/folderA/folderB/ 
Server logs show that only the relative path "folderA/folderB/" is forwarded 
to ADLS. 

Example 2: 
hadoop fs -ls adl_scheme://account/folderX/folderY/SampleFile 
Server logs show that only the relative path "folderX/folderY/SampleFile" is 
forwarded to ADLS. 
 
Fix: 
Prepend the ADLS scheme and account name to the path, so the paths in example 
1 and example 2 become 'adl_scheme://account/folderA/folderB/' and 
'adl_scheme://account/folderX/folderY/SampleFile' respectively. We have the 
fix ready and it is currently in the testing phase. 
 
Steps to reproduce the RENAME bug: 

Run the following command from the client: 

Example 1: 
hadoop fs -mv /folderA/oldFileName /folderA/newFileName 

Server logs show that only the relative old file path "folderA/oldFileName" is 
forwarded to ADLS, while the new file path 
"adl_scheme://account/folderA/newFileName" is fully qualified. 

Fix: 

Prepend the ADLS scheme and account name to the old file path as well. We have 
the fix ready and it is currently in the testing phase.
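
A minimal sketch of the path qualification described above (the helper and its 
parameters are illustrative, not the actual HttpFS patch):

{code:java}
public final class PathQualifier {
  /**
   * Prepend the ADLS scheme and account name to a path that the server
   * received in relative form. Illustrative helper, not HttpFS code.
   */
  static String qualify(String scheme, String account, String path) {
    if (path.startsWith(scheme + "://")) {
      return path; // already fully qualified
    }
    String absolute = path.startsWith("/") ? path : "/" + path;
    return scheme + "://" + account + absolute;
  }

  public static void main(String[] args) {
    // Prints "adl_scheme://account/folderA/folderB/"
    System.out.println(qualify("adl_scheme", "account", "folderA/folderB/"));
  }
}
{code}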



[jira] [Reopened] (HDFS-11280) Allow WebHDFS to reuse HTTP connections to NN

2017-01-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reopened HDFS-11280:
-

> Allow WebHDFS to reuse HTTP connections to NN
> -
>
> Key: HDFS-11280
> URL: https://issues.apache.org/jira/browse/HDFS-11280
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Zheng Shao
>Assignee: Zheng Shao
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11280.for.2.7.and.below.patch, 
> HDFS-11280.for.2.8.and.beyond.2.patch, HDFS-11280.for.2.8.and.beyond.3.patch, 
> HDFS-11280.for.2.8.and.beyond.4.patch, HDFS-11280.for.2.8.and.beyond.patch
>
>
> WebHDFSClient calls "conn.disconnect()", which disconnects from the NameNode. 
>  When we use webhdfs as the source in distcp, this used up all ephemeral 
> ports on the client side since all closed connections continue to occupy the 
> port with TIME_WAIT status for some time.
> According to http://tinyurl.com/java7-http-keepalive, we should call 
> conn.getInputStream().close() instead to make sure the connection is kept 
> alive.  This will get rid of the ephemeral port problem.
> Manual steps used to verify the bug fix:
> 1. Build original hadoop jar.
> 2. Try out distcp from webhdfs as source, and "netstat -n | grep TIME_WAIT | 
> grep -c 50070" on the local machine shows a big number (100s).
> 3. Build hadoop jar with this diff.
> 4. Try out distcp from webhdfs as source, and "netstat -n | grep TIME_WAIT | 
> grep -c 50070" on the local machine shows 0.
> 5. The explanation:  distcp's client side does a lot of directory scanning, 
> which would create and close a lot of connections to the namenode HTTP port.
> Reference:
> 2.7 and below: 
> https://github.com/apache/hadoop/blob/branch-2.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L743
> 2.8 and above: 
> https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L898
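
For reference, the keep-alive idiom described above looks roughly like this (a 
generic java.net sketch, not the actual HDFS-11280 patch; the URL is 
illustrative):

{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public final class KeepAliveSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode:50070/webhdfs/v1/?op=LISTSTATUS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    InputStream in = conn.getInputStream();
    try {
      byte[] buf = new byte[4096];
      while (in.read(buf) != -1) {
        // Drain the response body so the socket can be reused.
      }
    } finally {
      // Closing the (drained) stream returns the socket to the JDK's
      // keep-alive cache; conn.disconnect() would close the socket and
      // leave it in TIME_WAIT instead.
      in.close();
    }
  }
}
{code}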



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-01-03 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/

No changes




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.namenode.TestAuditLogs 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsTokens 
   hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter 
   hadoop.hdfs.server.namenode.TestBackupNode 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHDFSXAttr 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-compile-javac-root.txt
  [164K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-patch-shellcheck.txt
  [28K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [208K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [316K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/275/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Resolved] (HDFS-4169) Add per-disk latency metrics to DataNode

2017-01-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-4169.
--
  Resolution: Duplicate
   Fix Version/s: 3.0.0-alpha2
Target Version/s:   (was: )

> Add per-disk latency metrics to DataNode
> 
>
> Key: HDFS-4169
> URL: https://issues.apache.org/jira/browse/HDFS-4169
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
>
> Currently, if one of the drives on the DataNode is slow, it's hard to 
> determine what the issue is. This can happen due to a failing disk, bad 
> controller, etc. It would be preferable to expose per-drive metrics/jmx with 
> latency statistics about how long reads/writes are taking.



[jira] [Created] (HDFS-11287) Storage class member storageDirs should be private to avoid unprotected access by derived classes

2017-01-03 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11287:
-

 Summary: Storage class member storageDirs should be private to 
avoid unprotected access by derived classes
 Key: HDFS-11287
 URL: https://issues.apache.org/jira/browse/HDFS-11287
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


The HDFS-11267 fix made the abstract class Storage.java's member variable 
storageDirs thread safe, so that derived classes like NNStorage, JNStorage, 
and DataStorage do not hit a ConcurrentModificationException when volume 
add/remove and listing operations run in parallel. However, the rebase of that 
fix missed a few changes from the original patch. This JIRA addresses the 
addendum needed for the HDFS-11267 commits.
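
A minimal sketch of the encapsulation this issue asks for (the surrounding 
class is heavily simplified and the accessor names are illustrative, not the 
actual Storage.java API):

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public abstract class Storage {
  /** Private so derived classes cannot mutate the list unprotected. */
  private final List<StorageDirectory> storageDirs =
      new CopyOnWriteArrayList<StorageDirectory>();

  /** Derived classes read through accessors instead of touching the field. */
  protected List<StorageDirectory> getStorageDirs() {
    return storageDirs;
  }

  protected void addStorageDir(StorageDirectory sd) {
    storageDirs.add(sd);
  }

  /** Simplified stand-in for the real nested class. */
  public static class StorageDirectory {
  }
}
{code}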




[jira] [Created] (HDFS-11288) Manually allow block replication/deletion in Safe Mode

2017-01-03 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-11288:
-

 Summary: Manually allow block replication/deletion in Safe Mode
 Key: HDFS-11288
 URL: https://issues.apache.org/jira/browse/HDFS-11288
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 3.0.0-alpha1
Reporter: Lukas Majercak


Currently, Safe Mode does not allow block replication/deletion, which makes 
sense, especially on startup, as we do not want to replicate blocks 
unnecessarily. 

An issue we have seen in our clusters, though, is the NameNode getting 
overwhelmed by the amount of needed replications. In that case, we would like 
to be able to manually put the NN into a state in which reads/writes to the FS 
are disallowed but the NN continues replicating/deleting blocks.



[jira] [Created] (HDFS-11289) Make SPS movement monitor timeouts configurable

2017-01-03 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-11289:
--

 Summary: Make SPS movement monitor timeouts configurable
 Key: HDFS-11289
 URL: https://issues.apache.org/jira/browse/HDFS-11289
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-10285
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


Currently the SPS tracking monitor timeouts are hardcoded. This is the JIRA 
for making them configurable.

{code}
// TODO: below selfRetryTimeout and checkTimeout can be configurable later
// Now, the default values of selfRetryTimeout and checkTimeout are 30mins
// and 5mins respectively
this.storageMovementsMonitor = new BlockStorageMovementAttemptedItems(
    5 * 60 * 1000, 30 * 60 * 1000, storageMovementNeeded);
{code}
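
A minimal sketch of the configurable version (the configuration key names and 
the {{conf}} field are illustrative; the actual patch may use different keys):

{code:java}
// Inside the constructor, replacing the hardcoded values. Key names are
// illustrative, not necessarily the ones the patch introduces.
private static final String SPS_CHECK_TIMEOUT_KEY =
    "dfs.storage.policy.satisfier.check.timeout.millis";
private static final String SPS_SELF_RETRY_TIMEOUT_KEY =
    "dfs.storage.policy.satisfier.self.retry.timeout.millis";

// Fall back to the current hardcoded defaults: 5 mins and 30 mins.
long checkTimeout = conf.getLong(SPS_CHECK_TIMEOUT_KEY, 5 * 60 * 1000L);
long selfRetryTimeout = conf.getLong(SPS_SELF_RETRY_TIMEOUT_KEY, 30 * 60 * 1000L);
this.storageMovementsMonitor = new BlockStorageMovementAttemptedItems(
    checkTimeout, selfRetryTimeout, storageMovementNeeded);
{code}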



Re: [DISCUSS] Release cadence and EOL

2017-01-03 Thread Sangjin Lee
Happy new year!

I think this topic has aged quite a bit in the discussion thread. Should we
take it to a vote? Do we need additional discussions?

Regards,
Sangjin

On Wed, Nov 9, 2016 at 11:11 PM, Karthik Kambatla 
wrote:

> Fair points, Sangjin and Andrew.
>
> To get the ball rolling on this, I am willing to try the proposed policy.
>
> On Fri, Nov 4, 2016 at 12:09 PM, Andrew Wang 
> wrote:
>
> > I'm certainly willing to try this policy. There's definitely room for
> > improvement when it comes to streamlining the release process. The
> > create-release script that Allen wrote helps, but there are still a lot
> of
> > manual steps in HowToRelease for staging and publishing a release.
> >
> > Another perennial problem is reconciling git log with the changes and
> > release notes and JIRA information. I think each RM has written their own
> > scripts for this, but it could probably be automated into a Jenkins
> report.
> >
> > And the final problem is that branches are often not in a releasable
> > state. This is because we don't have any upstream integration testing.
> For
> > instance, testing with 3.0.0-alpha1 has found a number of latent
> > incompatibilities in the 2.8.0 branch. If we want to meaningfully speed
> up
> > the minor release cycle, continuous integration testing is a must.
> >
> > Best,
> > Andrew
> >
> > On Fri, Nov 4, 2016 at 10:33 AM, Sangjin Lee  wrote:
> >
> >> Thanks for your thoughts and more data points Andrew.
> >>
> >> I share your concern that the proposal may be more aggressive than what
> >> we have been able to accomplish so far. I'd like to hear from the
> community
> >> what is a desirable release cadence which is still within the realm of
> the
> >> possible.
> >>
> >> The EOL policy can also be a bit of a forcing function. By having a
> >> defined EOL, hopefully it would prod the community to move faster with
> >> releases. Of course, automating releases and testing should help.
> >>
> >>
> >> On Tue, Nov 1, 2016 at 4:31 PM, Andrew Wang 
> >> wrote:
> >>
> >>> Thanks for pushing on this Sangjin. The proposal sounds reasonable.
> >>>
> >>> However, for it to have teeth, we need to be *very* disciplined about
> the
> >>> release cadence. Looking at our release history, we've done 4
> maintenance
> >>> releases in 2016 and no minor releases. 2015 had 4 maintenance and 1
> >>> minor
> >>> release. The proposal advocates for 12 maintenance releases and 2
> minors
> >>> per year, or about 3.5x more releases than we've historically done. I
> >>> think
> >>> achieving this will require significantly streamlining our release and
> >>> testing process.
> >>>
> >>> For some data points, here are a few EOL lifecycles for some major
> >>> projects. They talk about support in terms of time (not number of
> >>> releases), and release on a cadence.
> >>>
> >>> Ubuntu maintains LTS for 5 years:
> >>> https://www.ubuntu.com/info/release-end-of-life
> >>>
> >>> Linux LTS kernels have EOLs ranging from 2 to 6 years, though it seems
> >>> only
> >>> one has actually ever been EOL'd:
> >>> https://www.kernel.org/category/releases.html
> >>>
> >>> Mesos supports minor releases for 6 months, with a new minor every 2
> >>> months:
> >>> http://mesos.apache.org/documentation/latest/versioning/
> >>>
> >>> Eclipse maintains each minor for ~9 months before moving onto a new
> >>> minor:
> >>> http://stackoverflow.com/questions/35997352/how-to-determine
> >>> -end-of-life-for-eclipse-versions
> >>>
> >>>
> >>>
> >>> On Fri, Oct 28, 2016 at 10:55 AM, Sangjin Lee 
> wrote:
> >>>
> >>> > Reviving an old thread. I think we had a fairly concrete proposal on
> >>> the
> >>> > table that we can vote for.
> >>> >
> >>> > The proposal is a minor release on the latest major line every 6
> >>> months,
> >>> > and a maintenance release on a minor release (as there may be
> >>> concurrently
> >>> > maintained minor releases) every 2 months.
> >>> >
> >>> > A minor release line is EOLed 2 years after it is first released or
> >>> there
> >>> > are 2 newer minor releases, whichever is sooner. The community
> >>> reserves the
> >>> > right to extend or shorten the life of a release line if there is a
> >>> good
> >>> > reason to do so.
> >>> >
> >>> > Comments? Objections?
> >>> >
> >>> > Regards,
> >>> > Sangjin
> >>> >
> >>> >
> >>> > On Tue, Aug 23, 2016 at 9:33 AM, Karthik Kambatla <
> ka...@cloudera.com>
> >>> > wrote:
> >>> >
> >>> > >
> >>> > >> Here is just an idea to get started. How about "a minor release
> >>> line is
> >>> > >> EOLed 2 years after it is released or there are 2 newer minor
> >>> releases,
> >>> > >> whichever is sooner. The community reserves the right to extend or
> >>> > shorten
> >>> > >> the life of a release line if there is a good reason to do so."
> >>> > >>
> >>> > >>
> >>> > > Sounds reasonable, especially for our first commitment. For current
> >>> > > releases, this essentially means 2.6.x is maintained until Nov 2016
> >>> and
> >>> > Apr
> >>> > > 2017 if 2.8 and 2.9 are no

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-01-03 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/

No changes




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestSymlinkLocalFSFileSystem 
   hadoop.fs.TestTrash 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.ipc.TestCallQueueManager 
   hadoop.ipc.TestRPCWaitForProxy 
   hadoop.ipc.TestProtoBufRpc 
   hadoop.ipc.TestIPC 
   hadoop.security.TestGroupsCaching 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.TestDFSInotifyEventInputStream 
   hadoop.net.TestNetworkTopology 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork 
   hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.web.TestWebHdfsTokens 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.namenode.TestAuditLogs 
   hadoop.hdfs.web.TestWebHDFSXAttr 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestSchedulingPolicy 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.v2.TestMRJobsWithProfiler 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.hdfs.TestNNBench 
   hadoop.mapreduce.lib.join.TestJoinProperties 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestFileChecksum 
   
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf 
   org.apache.hadoop.mapreduce.v2.TestUberAM 
   org.apache.hadoop.mapreduce.v2.TestMRJobs 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-compile-root.txt
  [124K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-compile-root.txt
  [124K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-compile-root.txt
  [124K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [204K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [2.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/206/artifact/out/patch-unit-hado

[jira] [Created] (HDFS-11290) TestFSNameSystemMBean should wait until the cache is cleared

2017-01-03 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-11290:


 Summary: TestFSNameSystemMBean should wait until the cache is 
cleared
 Key: HDFS-11290
 URL: https://issues.apache.org/jira/browse/HDFS-11290
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.8.0
Reporter: Akira Ajisaka


TestFSNamesystemMBean#testWithFSNamesystemWriteLock and #testWithFSEditLogLock 
get metrics after locking FSNamesystem/FSEditLog, but when the metrics are 
cached, the tests succeed even if computing the metrics would acquire those 
locks. The tests should wait until the cache is cleared.
This issue was reported by [~xkrogen] in HDFS-11180.
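
A minimal sketch of the kind of wait the tests could add 
(GenericTestUtils.waitFor is an existing Hadoop test helper; the cache-cleared 
predicate is a hypothetical placeholder):

{code:java}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Inside the test method: poll until the cached metrics snapshot has
// expired, so the next metrics read really exercises the locked code path.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return metricsCacheIsCleared(); // hypothetical helper
  }
}, 100, 10000); // check every 100 ms, wait up to 10 s
{code}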


