[jira] [Resolved] (HADOOP-18278) Do not perform a LIST call when creating a file

2022-06-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18278.
-
Target Version/s: 3.4.0
  Resolution: Duplicate

We do the check to make sure that apps don't create files over directories. If 
they do, your object store loses a lot of its "filesystemness": list, rename 
and delete all break.

HEAD doesn't do the validation, and if you create a file with overwrite=true 
we skip that call. Sadly, Parquet likes creating files with overwrite=false, so 
it does both HEAD and LIST, even when writing to task attempt dirs which are 
exclusively for use by a single thread and will be completely deleted at the end 
of the job.

The magic committer performance issue HADOOP-17833 and its PR 
https://github.com/apache/hadoop/pull/3289 turn off all the safety checks when 
writing under __magic dirs, as we know they are short-lived. We don't even check 
whether files are being created over directories.

The same optimisation is available when writing any file, as the PR contains
HADOOP-15460: the S3A FS adds "fs.s3a.create.performance" to the createFile() 
builder option set.

{code}
FSDataOutputStream out = fs.createFile(new Path("s3a://bucket/subdir/output.txt"))
    .opt("fs.s3a.create.performance", true)
    .build();
{code}
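Because the option is set with opt() rather than must(), filesystems which do 
not recognise it simply ignore it; use must() if you want file creation to fail 
wherever the optimisation is unsupported.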

If you use this you will get the speedup you want anywhere, but you had 
better be confident that you are not overwriting a directory. See
https://github.com/steveloughran/hadoop/blob/s3/HADOOP-17833-magic-committer-performance/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md#-s3a-specific-options

At the time of writing (June 8, 2022) this PR is in critical need of review. 
Please look at the patch, review it, and make sure it will work for you. This 
will be your opportunity to make sure it is correct before we ship it. You are 
clearly looking at the internals of what we're doing, so your insight will be 
valued. Thanks.

> Do not perform a LIST call when creating a file
> ---
>
> Key: HADOOP-18278
> URL: https://issues.apache.org/jira/browse/HADOOP-18278
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sam Kramer
>Priority: Major
>
> Hello,
> We've noticed that when creating a file which does not exist in S3, an 
> extra LIST call gets issued to check whether it's a directory (i.e. if key = 
> "bar", it will issue an object list request for "bar/"). 
> Is this really necessary? Shouldn't a HEAD request be sufficient to determine 
> whether it actually exists or not? As we're creating 1000s of files, this is 
> quite expensive: we're effectively doubling our costs for file creation. 
> Curious whether others have experienced similar or identical issues, or 
> whether there are any workarounds. 
> [https://github.com/apache/hadoop/blob/516a2a8e440378c868ddb02cb3ad14d0d879037f/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L3359-L3369]
>  
> Thanks,
> Sam






[jira] [Created] (HADOOP-18280) Fix typos in the definition of zstd

2022-06-08 Thread Hualong Zhang (Jira)
Hualong Zhang created HADOOP-18280:
--

 Summary: Fix typos in the definition of zstd
 Key: HADOOP-18280
 URL: https://issues.apache.org/jira/browse/HADOOP-18280
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Reporter: Hualong Zhang


There are some typos in the files *hadoop-common/pom.xml* and *ZStandardCompressor.c*.
hadoop-common/pom.xml:
{code:xml|title=pom.xml|borderStyle=solid}
false
/p:RequireZstd=${require.ztsd}
{code}
It should read as follows:
{code:xml}
false
/p:RequireZstd=${require.zstd}
{code}
ZStandardCompressor.c:
{code:c|title=ZStandardCompressor.c|borderStyle=solid}
// Load the libztsd.so from disk
{code}
It should read as follows:
{code:c}
// Load the libzstd.so from disk
{code}






[jira] [Resolved] (HADOOP-18280) Fix typos in the definition of zstd

2022-06-08 Thread Hualong Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hualong Zhang resolved HADOOP-18280.

Resolution: Duplicate

> Fix typos in the definition of zstd
> ---
>
> Key: HADOOP-18280
> URL: https://issues.apache.org/jira/browse/HADOOP-18280
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hualong Zhang
>Priority: Trivial
>
> There are some typos in the files *hadoop-common/pom.xml* and 
> *ZStandardCompressor.c*.
> hadoop-common/pom.xml:
> {code:xml|title=pom.xml|borderStyle=solid}
> false
> /p:RequireZstd=${require.ztsd}
> {code}
> It should read as follows:
> {code:xml}
> false
> /p:RequireZstd=${require.zstd}
> {code}
> ZStandardCompressor.c:
> {code:c|title=ZStandardCompressor.c|borderStyle=solid}
> // Load the libztsd.so from disk
> {code}
> It should read as follows:
> {code:c}
> // Load the libzstd.so from disk
> {code}






[DISCUSS] Forthcoming Hadoop releases

2022-06-08 Thread Steve Loughran
I want to start a quick discussion on a plan for hadoop releases this
summer. I am willing to do the release manager work. Mukund and Mehakmeet
have already volunteered to help, even if they don't know that yet.

I've got two goals

   1. minor followup to 3.3.3
   2. feature release of new stuff


*Followup to 3.3.3, working title "3.3.4"*

I've a PR up on github to add those changes to the 3.3.2/3.3.3 line which
have shipped elsewhere and/or we consider critical.

https://github.com/apache/hadoop/pull/4345

This is for critical data integrity/service availability patches; things
like test failures we will just triage.

I can start a new release of this at the end of the week, with an RC up
next week ready for review. With the wonderful docker based build and some
extra automation I've been adding for validating releases
(validate-hadoop-client-artifacts), getting that RC out is not that
problematic; issuing git commands is the heavy lifting.

What does take effort is the testing by everybody else; the smaller the set
of changes the more this is limited to validating the artifacts and the
maven publishing.

As it is a follow-up to Hadoop 3.3.3, it needs the version number
3.3.4. This raises the question "what about branch-3.3?", which brings me to
the next deliverable.

*branch-3.3 => branch-3.4, targeting hadoop 3.4.0 in 3Q22*

With the 3.3.x line being maintained for critical fixes only, make the
hadoop version in branch-3.3 "hadoop-3.4.0" and release it later this year.

A release schedule which is probably doable, despite people taking time off
over the summer, could be:

   - feature complete by July/August
   - RC(s) in Sept/Oct, with the goal of shipping by October


I volunteer to be release manager, albeit with critical help from
colleagues. For people who haven't worked with me on a project release
before, know that I'm fairly ruthless about getting changes in once the
branch is locked down. So get those features in now.

hadoop trunk gets its version number incremented to 3.5.0-SNAPSHOT.

It's probably time we think about what a release off trunk would mean, but
I would like to get a branch-3.3 release out sooner rather than later.

What do people think of this? And is there anyone else willing to get
involved with the release process?

-Steve


[DISCUSS] Filesystem API shim library to assist applications still targeting previous hadoop releases.

2022-06-08 Thread Steve Loughran
I've just created an initial project "fs-api-shim" to provide controlled
access to the hadoop 3.3.3+ filesystem API calls on hadoop 3.2.0+ releases
https://github.com/steveloughran/fs-api-shim

The goal here is to make it possible for core file format libraries
(Parquet, Avro, ORC, Arrow etc.) and other apps (HBase, ...) to take
advantage of those APIs which we have updated and optimised for access to
cloud stores. Currently the applications do not, and so underperform on
recent releases. I have the ability to change our internal forks, but I
would like to let others gain from the changes and avoid having our
internal libraries diverge too much.

Currently too many libraries seem frozen in time:

Avro: still rejecting changes which don't compile on hadoop 2
https://github.com/apache/avro/pull/1431

Parquet: still using reflection to access post-hadoop-1.x filesystem API
calls
https://github.com/apache/parquet-mr/pull/971

I'm not going to support hadoop 2.10 —but we can at least say "move up to
hadoop 3.2.x and we will let you use later APIs when available"

Some calls, like openFile(), will work everywhere; on versions with the
openFile builder API they will take the file status and read policy, so
libraries can declare whether their IO is random or sequential, and skip
the HEAD requests they currently issue against the object stores to verify
that the file exists and determine its length for the ranged GET requests
which follow.

https://github.com/steveloughran/fs-api-shim/blob/main/fs-api-shim-library/src/main/java/org/apache/hadoop/fs/shim/FileSystemShim.java#L38

On Hadoop 3.2.x, or if openFile() fails for some reason, it will just
downgrade to the classic open() call.
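
As a sketch of what the underlying Hadoop 3.3.3+ builder API looks like (this
is the stock API the shim wraps, not the shim's own signature; the read policy
value and helper method name here are made up for illustration):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.util.functional.FutureIO;

    /** Open a file for random IO, falling back to classic open(). */
    static FSDataInputStream openForRandomIO(FileSystem fs, FileStatus st)
        throws IOException {
      try {
        return FutureIO.awaitFuture(fs.openFile(st.getPath())
            .withFileStatus(st)   // saves the HEAD probe for existence/length
            .opt("fs.option.openfile.read.policy", "random")
            .build());
      } catch (UnsupportedOperationException e) {
        return fs.open(st.getPath());   // classic fallback
      }
    }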

Other API calls we can support dynamic binding to through reflection, but
not actually fall back from if they are unavailable. This will allow
libraries to use the API calls if present, but force them to come up with
alternative solutions if not.

A key part of this is FSDataInputStream, where the ByteBufferReadable API
would be of benefit to Parquet:

https://github.com/steveloughran/fs-api-shim/blob/main/fs-api-shim-library/src/main/java/org/apache/hadoop/fs/shim/FSDataInputStreamShim.java
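
For reference, this is the sort of capability probe a library has to write
today against stock Hadoop, given an open FSDataInputStream "in" (a sketch;
the buffer size is arbitrary):

    import java.nio.ByteBuffer;
    import org.apache.hadoop.fs.FSDataInputStream;

    // FSDataInputStream.read(ByteBuffer) throws UnsupportedOperationException
    // when the wrapped stream is not ByteBufferReadable, so probe first.
    ByteBuffer buffer = ByteBuffer.allocateDirect(1 << 20);
    if (in.hasCapability("in:readbytebuffer")) {
      int bytesRead = in.read(buffer);   // direct ByteBuffer read path
    } else {
      // fall back to read(byte[]) into a heap buffer
    }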

When we get the vectored IO feature branch in, we can offer similar
reflection-based access. It means applications can compile against hadoop
3.2.x and 3.3.x but still take advantage of the APIs when running on a
version which has them.

I'm going to stay clear of more complicated APIs which don't offer tangible
performance gains and which are very hard to do (IOStatistics).

Testing is fun; I have a plan there which consists of FS contract tests in
the shim test source tree to verify the 3.2.0 functionality, and an
adjacent module which will run those same tests against more recent
versions. The tests will have to be targetable against object stores as
well as the local and mini-HDFS filesystems.

This is all in github; however, it is very much a hadoop extension library.
Is there a way we could release it as an ASF library but on a different
timetable from normal Hadoop releases? There is always incubator, but this
is such a minor project that it is closer to the org.apache.hadoop.thirdparty
library, in that it is something all current committers should be able
to commit to and release, while releasing on a schedule independent of
hadoop releases themselves. Having it come from this project should give it
more legitimacy.

Steve


[GitHub] [hadoop-thirdparty] steveloughran opened a new pull request, #19: HADOOP-18197. Upgrade protobuf to 3.20.1

2022-06-08 Thread GitBox


steveloughran opened a new pull request, #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19

   
   This patch bumps up the protobuf version so that Hadoop
   is not vulnerable to CVE-2021-22569.
   
   I'm not renaming the module hadoop-shaded-protobuf_3_7
   because that significantly complicates imports/upgrading.
   That said, I don't see why the version number needed to be
   included there. We will have to live with that.
   
   This also fixes up the parent POM references in the child modules,
   as IntelliJ requires a full path.
   
   Testing: needs to go through a hadoop build with the updated jar and
   with its own protobuf version marker updated.
   Verified hadoop compiles on a MacBook M1.
   





Re: [DISCUSS] Filesystem API shim library to assist applications still targeting previous hadoop releases.

2022-06-08 Thread Ayush Saxena
Just answering the last point:
>
> Is there a way we could release it as an ASF Library but on a different
> timetable from normal Hadoop releases? There is always incubator, but this
> is such a minor project it is closer to the org.apache.hadoop.thirdparty
> library in that it is something all current committers okay should be able
> to commit to and release, while releasing on a schedule independent of
> hadoop releases themselves. Having it come from this project should give it
> more legitimacy.


Possible options I can think of:

   - Check it in as part of the hadoop trunk code as a separate module and
   make sure it isn't part of the normal release, like ozone & submarine did
   in the early days: they were part of the hadoop code base, but
   followed a different release cycle.
   - Get it in as a separate repository under hadoop, like
   hadoop-thirdparty, and again like ozone & submarine were operating
   just before leaving.
   - Incubator: which you already said no to, but the option is still
   there if all else fails.
   - Add it as a module in hadoop-thirdparty and pair its release with
   the thirdparty release; but that might not make sense because of the
   name 'thirdparty', and it would still tie you to its release schedule.


The easiest might be the first option; the cleanest might be the second. If
you go for a separate repo or something like that, you need to
set up the Jenkins jobs and all to run the PreCommit stuff, and have some
test coverage as well for the code you are checking in.

Whatever option you choose, I think it would require a formal vote; at
least the 2nd & 4th options would. For the 3rd I don't know how they
operate. For the 1st it is also better to have one, to prevent people
coming and shouting at the end. :-)

-Ayush


[jira] [Resolved] (HADOOP-12020) Support AWS S3 reduced redundancy storage class

2022-06-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12020.
-
Fix Version/s: 3.3.4
   Resolution: Fixed

Merged to trunk and 3.3.4. Thanks!

> Support AWS S3 reduced redundancy storage class
> ---
>
> Key: HADOOP-12020
> URL: https://issues.apache.org/jira/browse/HADOOP-12020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: Hadoop on AWS
>Reporter: Yann Landrin-Schweitzer
>Assignee: Monthon Klongklaew
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects.
> This offers, according to Amazon's material, 99.999999999% reliability.
> For many applications, however, the 99.99% reliability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol, or the new s3a scheme, should 
> support overriding the default storage class of created s3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes to the s3n and s3a drivers, using 
> a configuration property fs.s3n.storage.class to override the default storage 
> class when desirable. 
> This override could be implemented in Jets3tNativeFileSystemStore with:
>   S3Object object = new S3Object(key);
>   ...
>   if (storageClass != null) object.setStorageClass(storageClass);
> It would take a more complex form in s3a, e.g. setting:
>   InitiateMultipartUploadRequest initiateMPURequest =
>       new InitiateMultipartUploadRequest(bucket, key, om);
>   if (storageClass != null) {
>     initiateMPURequest =
>         initiateMPURequest.withStorageClass(storageClass);
>   }
> and similar statements in various places.






[jira] [Created] (HADOOP-18281) Tune S3A storage class support

2022-06-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18281:
---

 Summary: Tune S3A storage class support
 Key: HADOOP-18281
 URL: https://issues.apache.org/jira/browse/HADOOP-18281
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.4
 Environment: 
Followup to HADOOP-12020, with work/review arising from rebasing HADOOP-17833 atop it.

* Can we merge ITestS3AHugeFilesStorageClass into one of the existing test 
cases? Just because it is slow... ideally we want as few of those as possible, 
even if by testing multiple things at the same time we break the rules of testing.
* Move setting the storage class into
setOptionalMultipartUploadRequestParameters() and setOptionalPutRequestParameters().
* Get both newPutObjectRequest() calls to set the storage class.

Once HADOOP-17833 is in, make this a new option which can be explicitly used 
in createFile(); see the sketch after the list below. I've updated 
PutObjectOptions to pass a value around, and made sure it gets down to the 
request factory. That leaves:
* setting the storage class from the {{CreateFileBuilder}} options
* testing
* doc update
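
A sketch of the intended createFile() integration (assumption: the builder key 
reuses the "fs.s3a.create.storage.class" configuration option which 
HADOOP-12020 added; the path and storage class value are made up):
{code:java}
// Hypothetical usage once CreateFileBuilder passes the option through;
// treating "fs.s3a.create.storage.class" as a builder key is an assumption.
FSDataOutputStream out = fs.createFile(new Path("s3a://bucket/logs/archive.bin"))
    .opt("fs.s3a.create.storage.class", "reduced_redundancy")
    .build();
{code}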

Reporter: Steve Loughran









[GitHub] [hadoop-thirdparty] ayushtkn commented on pull request #19: HADOOP-18197. Upgrade protobuf to 3.21.1

2022-06-08 Thread GitBox


ayushtkn commented on PR #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1150334147

   > That said, I don't see why the version number needed to be
   > included there. We will have to live with that.
   
   That wasn't something we wanted to do initially; it came as a 
suggestion in the ML thread:
   https://lists.apache.org/thread/v7cqm2bwvrlyhmdl2xo9pg84rvb6t214
   
   Guess as per the ML suggestion it was to be done for Guava also, but the 
developer forgot, so we have to live without it in guava.





[jira] [Created] (HADOOP-18282) Add .asf.yaml to hadoop-thirdparty

2022-06-08 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-18282:
-

 Summary: Add .asf.yaml to hadoop-thirdparty
 Key: HADOOP-18282
 URL: https://issues.apache.org/jira/browse/HADOOP-18282
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ayush Saxena
Assignee: Ayush Saxena


There is no .asf.yaml file in thirdparty, so it is dropping mails to common-dev for everything.
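
A minimal sketch of what could go in (the mailing-list targets here are 
assumptions, copied from the conventions of the main hadoop repos, and are 
for the PR to settle):
{code:yaml}
# Hypothetical .asf.yaml for hadoop-thirdparty; list targets are assumptions.
notifications:
  commits: common-commits@hadoop.apache.org
  issues: common-issues@hadoop.apache.org
  pullrequests: common-issues@hadoop.apache.org
{code}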






[GitHub] [hadoop-thirdparty] ayushtkn opened a new pull request, #20: HADOOP-18282. Add .asf.yaml to hadoop-thirdparty.

2022-06-08 Thread GitBox


ayushtkn opened a new pull request, #20:
URL: https://github.com/apache/hadoop-thirdparty/pull/20

   Similar to [HADOOP-17234](https://issues.apache.org/jira/browse/HADOOP-17234)
   





Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-06-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId
 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-mvnsite-root.txt
  [668K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [440K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [120K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/686/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   
https://ci-hadoop.apache.org/job/hadoop