[jira] [Created] (HADOOP-17064) Drop MRv1 binary compatibility in 4.0.0

2020-06-04 Thread Rungroj Maipradit (Jira)
Rungroj Maipradit created HADOOP-17064:
--

 Summary: Drop MRv1 binary compatibility in 4.0.0
 Key: HADOOP-17064
 URL: https://issues.apache.org/jira/browse/HADOOP-17064
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Rungroj Maipradit


A code comment suggests making the setJobConf method deprecated along with the
mapred package (HADOOP-1230). HADOOP-1230 was closed long ago, but the method
is still not annotated as deprecated.
{code:java}
 /**
   * This code is to support backward compatibility and break the compile  
   * time dependency of core on mapred.
   * This should be made deprecated along with the mapred package HADOOP-1230. 
   * Should be removed when mapred package is removed.
   */ {code}
Comment location: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88]
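
For illustration, a minimal sketch of the proposed annotation, assuming the method keeps its current private signature in ReflectionUtils (the body and javadoc wording here are illustrative, not the trunk code):
{code:java}
/**
 * Supports MRv1 binary compatibility by breaking the compile-time
 * dependency of core on mapred.
 *
 * @deprecated MRv1 binary compatibility is slated to be dropped in 4.0.0;
 * this method should be removed along with it.
 */
@Deprecated
private static void setJobConf(Object theObject, Configuration conf) {
  // Reflectively invokes JobConfigurable.configure(JobConf) when the
  // mapred classes are on the classpath (simplified sketch only).
}
{code}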

From the previous discussion, it seems that this method is still required as 
long as we ensure binary compatibility with MRv1:
 
https://issues.apache.org/jira/browse/HADOOP-17047?focusedCommentId=17111702&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17111702

Mingliang Liu suggested dropping MRv1 binary compatibility in 4.0.0:
 
https://issues.apache.org/jira/browse/HADOOP-17047?focusedCommentId=17112442&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17112442




Re: mvnsite-hadoop-common fails on Yetus

2020-06-04 Thread Akira Ajisaka
Fixed.

-Akira

On Sat, May 30, 2020 at 4:54 PM Akira Ajisaka  wrote:

> It seems this error always occurs.
> Filed https://issues.apache.org/jira/browse/HADOOP-17056
>
> I'm trying to find the root cause.
>
> Thanks,
> Akira
>
> On Fri, May 29, 2020 at 5:31 AM Wei-Chiu Chuang wrote:
>
>> Stephen raised a similar question today. My hunch is there's a bad Jenkins
>> slave machine, but I haven't gotten any further.
>>
>> On Thu, May 28, 2020 at 12:27 PM Ahmed Hussein  wrote:
>>
>> > Hi,
>> >
>> > Yetus has been failing in hadoop-common's pre-commit build for quite some
>> > time. It fails for the trunk branch as well.
>> > Any idea how to fix this?
>> >
>> > https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/console
>> >
>> >
>> >
>> > https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
>> >
>> >
>> > > [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
>> > > ERROR: yetus-dl: gpg unable to import /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
>> > > [INFO] ------------------------------------------------------------------------
>> > > [INFO] BUILD FAILURE
>> > > [INFO] ------------------------------------------------------------------------
>> > > [INFO] Total time:  9.377 s
>> > > [INFO] Finished at: 2020-05-28T17:37:41Z
>> > > [INFO] ------------------------------------------------------------------------
>> > > [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project hadoop-common: Command execution failed. Process exited with an error: 1 (Exit value: 1) -> [Help 1]
>> > > [ERROR]
>> > > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>> > > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> > > [ERROR]
>> > > [ERROR] For more information about the errors and possible solutions, please read the following articles:
>> > > [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
>> >
>> >
>> > --
>> > Best Regards,
>> >
>> > *Ahmed Hussein, PhD*
>> >
>>
>


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-06-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/706/

[Jun 3, 2020 9:08:28 PM] (aajisaka) HADOOP-17062. Fix shelldocs path in 
Jenkinsfile (#2049)
[Jun 3, 2020 9:16:38 PM] (inigoiri) HDFS-14054. TestLeaseRecovery2:




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-common-project/hadoop-minikdc
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 515]

FindBugs :

   module:hadoop-common-project/hadoop-auth
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

FindBugs :

   module:hadoop-common-project/hadoop-common
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At CipherSuite.java:[line 44]
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67]
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
   Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 383]
   Useless condition:lazyPersist == true at this point At CommandWithDestination.java:[line 502]
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java: At DoubleWritable.java:[line 78]
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) incorrectly handles double value At DoubleWritable.java:[line 97]
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java: At FloatWritable.java:[line 71]
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:int) incorrectly handles float value At FloatWritable.java:[line 89]
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 389]
   Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl At DefaultMetricsFactory.java:[line 49]
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 92]
   Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
[jira] [Created] (HADOOP-17065) Adding Network Counters in ABFS

2020-06-04 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17065:


 Summary: Adding Network Counters in ABFS
 Key: HADOOP-17065
 URL: https://issues.apache.org/jira/browse/HADOOP-17065
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


Network Counters to be added in ABFS:
|CONNECTIONS_MADE|Number of times a connection was made with Azure Data Lake|
|SEND_REQUESTS|Number of send requests|
|GET_RESPONSE|Number of responses received|
|BYTES_SEND|Number of bytes sent|
|BYTES_RECEIVED|Number of bytes received|
|READ_THROTTLE|Number of times a read operation was throttled|
|WRITE_THROTTLE|Number of times a write operation was throttled|

Proposal:
 * Add these counters as part of the AbfsStatistic enum already introduced in HADOOP-17016 (see the sketch below).
 * Increment the counters across the ABFS network services.
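
A hedged sketch of what this could look like, assuming the enum shape HADOOP-17016 gave AbfsStatistic (a stat name plus a description per entry); the incrementCounter call site in the trailing comment is hypothetical:
{code:java}
// Sketch only: the enum shape follows HADOOP-17016's AbfsStatistic;
// entries mirror the table above (remaining counters omitted).
public enum AbfsStatistic {

  CONNECTIONS_MADE("connections_made",
      "Number of times a connection was made with Azure Data Lake"),
  SEND_REQUESTS("send_requests",
      "Number of send requests"),
  BYTES_SEND("bytes_send",
      "Number of bytes sent"),
  BYTES_RECEIVED("bytes_received",
      "Number of bytes received");

  private final String statName;
  private final String statDescription;

  AbfsStatistic(String statName, String statDescription) {
    this.statName = statName;
    this.statDescription = statDescription;
  }

  public String getStatName() {
    return statName;
  }

  public String getStatDescription() {
    return statDescription;
  }
}

// Hypothetical increment at the network layer, e.g. after a connection
// is established in the ABFS client:
//   abfsCounters.incrementCounter(AbfsStatistic.CONNECTIONS_MADE, 1);
{code}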






Re: [NOTICE] Removal of protobuf classes from Hadoop Token's public APIs' signature

2020-06-04 Thread Brahma Reddy Battula
Yes, it's a blocker for 3.3.0. I'm holding the release for this issue.

On Tue, Jun 2, 2020 at 7:08 AM Akira Ajisaka  wrote:

> > Please check https://issues.apache.org/jira/browse/HADOOP-17046
> > This Jira proposes to keep the existing ProtobufRpcEngine as-is (without
> > shading and with the protobuf-2.5.0 implementation) to support downstream
> > implementations.
>
> Thank you, Vinay. I checked the PR and it mostly looks good.
> How do we proceed?
>
> I suppose Hadoop 3.3.0 is blocked by this issue. Is that true or not?
>
> Thanks,
> Akira
>
> On Tue, May 19, 2020 at 2:06 AM Eric Yang  wrote:
>
> > ProtobufHelper should not be a public API.  Hadoop uses protobuf
> > serialization to improve RPC performance, but with many drawbacks.  The
> > generalized objects usually require another indirection to map to usable
> > Java objects; this makes Hadoop code messy, but that is a topic for
> > another day.  The main challenge with the UGI class is that it makes the
> > system difficult to secure.
> >
> > In Google's world, gRPC is built on top of protobuf and the HTTP/2 binary
> > protocol, and is secured by JWT tokens.  This means that before
> > deserializing a protobuf object from the wire, the server must deserialize
> > a JSON token to determine whether the call is authenticated.  Hence,
> > protobuf RPC no longer offers a meaningful performance gain over JSON,
> > because JWT token deserialization happens on every gRPC call to ensure the
> > request is secured properly.
> >
> > In the Hadoop world, we are not using JWT tokens for authentication; we
> > have pluggable token implementations: SPNEGO, delegation tokens, or some
> > kind of SASL.  The UGI class should not expose a protobuf token as a
> > public interface; otherwise a downstream application can forge the
> > protobuf token, and it becomes a privilege escalation issue.  In my
> > opinion, the UGI class must be as private as possible to prevent forgery.
> > Downstream applications are discouraged from using UGI.doAs for
> > impersonation, to reduce privilege escalation.  Instead, a downstream
> > application should run as an unprivileged Unix daemon rather than root.
> > This ensures that a vulnerability in one application does not spill over
> > into security problems for another application.  Some people will
> > disagree with this statement because existing applications are already
> > written to take advantage of UGI.doAs, such as Hive loading external
> > tables.  Fortunately, Hive provides an option to run without doAs.
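
For readers unfamiliar with the pattern being discouraged here, a minimal editorial sketch of UGI.doAs impersonation (the UserGroupInformation calls are real Hadoop API; the user name is illustrative):
{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsExample {
  public static void main(String[] args) throws Exception {
    // The impersonation pattern discussed above: a privileged service
    // performs work on behalf of another user via a proxy UGI.
    UserGroupInformation proxy = UserGroupInformation.createProxyUser(
        "end-user", UserGroupInformation.getLoginUser());

    proxy.doAs((PrivilegedExceptionAction<Void>) () -> {
      // Filesystem or RPC calls here run with "end-user"'s identity.
      return null;
    });
  }
}
{code}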
> >
> > Protobuf is not a suitable candidate for security token transport because
> > it is a strongly typed transport.  If multiple tokens are transported in
> > the UGI protobuf, small differences between ASCII, UTF-8, and UTF-16 can
> > cause conversion ambiguity that might create security holes or headaches
> > with type casting.  I am +1 on removing protobuf from the Hadoop Token
> > API.  Representing a Hadoop Token as a byte array, with JSON as the
> > default serializer, is probably the simpler solution to keep the system
> > robust without repeating past mistakes.
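
To illustrate the byte-array-plus-JSON idea sketched above (an editorial sketch; the TokenPayload class and its fields are hypothetical, not Hadoop's Token API):
{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical payload, not org.apache.hadoop.security.token.Token.
public class TokenPayload {
  public String kind;       // token kind, e.g. a delegation token type
  public String service;    // target service address
  public byte[] identifier; // opaque identifier bytes
  public byte[] password;   // opaque secret bytes

  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Default JSON serializer: the token crosses the wire as an opaque
  // byte array, keeping the transport weakly typed.
  public byte[] toBytes() throws java.io.IOException {
    return MAPPER.writeValueAsBytes(this);
  }

  public static TokenPayload fromBytes(byte[] wire) throws java.io.IOException {
    return MAPPER.readValue(wire, TokenPayload.class);
  }
}
{code}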
> >
> > regards,
> > Eric
> >
> > On Sun, May 17, 2020 at 11:56 PM Vinayakumar B 
> > wrote:
> >
> > > Hi Wei-chu and steve,
> > >
> > > Thanks for sharing insights.
> > >
> > > I have also tried to compile and execute Ozone pointing to
> > > trunk (3.4.0-SNAPSHOT), which has shaded and upgraded protobuf.
> > >
> > > Other than the usage of internal protobuf APIs, which breaks
> > > compilation, I found another major problem: the Hadoop RPC
> > > implementations in downstream projects are based on non-shaded protobuf
> > > classes.
> > >
> > > 'ProtobufRpcEngine' takes arguments and tries to typecast them to the
> > > protobuf 'Message', which it expects to be of the 3.7 version and the
> > > shaded package (i.e. o.a.h.thirdparty.*).
> > >
> > > So, unless downstreams upgrade their protobuf classes to
> > > 'hadoop-thirdparty', this issue will continue to occur, even after
> > > solving the compilation issues caused by internal usage of private APIs
> > > with protobuf signatures.
> > >
> > > I found a possible workaround for this problem.
> > > Please check https://issues.apache.org/jira/browse/HADOOP-17046
> > >   This Jira proposes to keep the existing ProtobufRpcEngine as-is
> > > (without shading and with the protobuf-2.5.0 implementation) to support
> > > downstream implementations.
> > >   Use the new ProtobufRpcEngine2 to use shaded protobuf classes within
> > > Hadoop and in later projects that wish to upgrade their protobufs to 3.x.
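
To make the proposed split concrete, a hedged sketch of per-protocol engine selection: RPC.setProtocolEngine is the existing Hadoop hook, ProtobufRpcEngine2 is the class proposed in HADOOP-17046 (so treat that name as tentative), and the protocol interfaces are placeholders:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

public class RpcEngineSelection {

  // Placeholder protocol interfaces, purely illustrative.
  private interface LegacyProtocolPB {}
  private interface UpgradedProtocolPB {}

  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Downstream code still compiled against unshaded protobuf-2.5.0
    // keeps the existing engine, unchanged:
    RPC.setProtocolEngine(conf, LegacyProtocolPB.class,
        ProtobufRpcEngine.class);

    // Code migrated to the shaded o.a.h.thirdparty protobuf 3.x would
    // opt in to the new engine (class name tentative, per HADOOP-17046):
    // RPC.setProtocolEngine(conf, UpgradedProtocolPB.class,
    //     ProtobufRpcEngine2.class);
  }
}
{code}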
> > >
> > > For Ozone compilation:
> > >   I have submitted two PRs to make preparations to adopt the Hadoop 3.3+
> > > upgrade. These PRs remove the dependency on Hadoop for those internal
> > > APIs and implement their own copies in Ozone with non-shaded protobuf.
> > > HDDS-3603: https://github.com/apache/hadoop-ozone/pull/932
> > > HDDS-3604: https://github.com/apache/hadoop-ozone/pull/933
> > >
> > > 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-06-04 Thread Brahma Reddy Battula
The following blocker, which is ready for review, is pending for the 3.3.0
release. We should have an RC soon.
https://issues.apache.org/jira/browse/HADOOP-17046

The protobuf dependency issue was unexpected.

On Mon, Jun 1, 2020 at 7:11 AM Sheng Liu  wrote:

> Hi folks,
>
> It looks like the 3.3.0 branch was created quite a while ago. I'm not sure
> whether any blocker issues remain that need to be addressed before the
> Hadoop 3.3.0 release is published; maybe we can bring them up here and
> move the release forward?
>
> Thanks.
>
> Brahma Reddy Battula wrote on Wed, Mar 25, 2020 at 1:55 AM:
>
> > Thanks to all.
> >
> > We will make this optional and update the wiki accordingly.
> >
> > On Wed, Mar 18, 2020 at 12:05 AM Vinayakumar B 
> > wrote:
> >
> > > Making the ARM artifact optional makes the release process simpler for
> > > the RM and unblocks the release process (if ARM resources are
> > > unavailable).
> > >
> > > There are still options to collaborate with the RM (as Brahma mentioned
> > > earlier) and provide the ARM artifact before or after the vote.
> > > If feasible, the RM can decide to add the ARM artifact by collaborating
> > > with @Brahma Reddy Battula or me to get it.
> > >
> > > -Vinay
> > >
> > > On Tue, Mar 17, 2020 at 11:39 PM Arpit Agarwal
> > >  wrote:
> > >
> > > > Thanks for the clarification, Brahma. Can you update the proposal to
> > > > state that it is optional (it may help to put the proposal on cwiki)?
> > > >
> > > > Also, if we go ahead, the RM documentation should be clear that this
> > > > is an optional step.
> > > >
> > > >
> > > > > On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula <bra...@apache.org> wrote:
> > > > >
> > > > > Sure, we won't make it mandatory for voting, and we can upload to
> > > > > downloads once the release vote has passed.
> > > > >
> > > > > On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
> > > > >  wrote:
> > > > >
> > > > >>> Sorry, I didn't get you... do you mean once release voting has
> > > > >>> passed and the RM uploads it?
> > > > >>
> > > > >> Yes, that is what I meant. I don’t want us to make more mandatory
> > > > >> work for the release manager because the job is hard enough already.
> > > > >>
> > > > >>
> > > > >>> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula <
> > > bra...@apache.org>
> > > > >> wrote:
> > > > >>>
> > > > >>> Sorry, I didn't get you... do you mean once release voting has
> > > > >>> passed and the RM uploads it?
> > > > >>>
> > > > >>> FYI, there is also a Docker image for ARM which supports all the
> > > > >>> scripts (createrelease, start-build-env.sh, etc.).
> > > > >>>
> > > > >>> https://issues.apache.org/jira/browse/HADOOP-16797
> > > > >>>
> > > > >>> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
> > > > >>>  wrote:
> > > > >>>
> > > >  Can ARM binaries be provided after the fact? We cannot increase the
> > > >  RM’s burden by asking them to generate an extra set of binaries.
> > > > 
> > > > 
> > > > > On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula <
> > > > bra...@apache.org>
> > > >  wrote:
> > > > >
> > > > > + Dev mailing list.
> > > > >
> > > > > -- Forwarded message -
> > > > > From: Brahma Reddy Battula 
> > > > > Date: Tue, Mar 17, 2020 at 10:31 PM
> > > > > Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> > > > > To: junping_du 
> > > > >
> > > > >
> > > > > Thanks, Junping, for your reply.
> > > > >
> > > > > bq. I think most of us in the Hadoop community don't want to be
> > > > > biased toward ARM or any other platform.
> > > > >
> > > > > Yes, release voting will be based on the source code. AFAIK, the
> > > > > binary we provide is for users to download and verify easily.
> > > > >
> > > > > bq. The only thing I try to understand is how much complexity gets
> > > > > involved in our RM work. Does that potentially become a blocker for
> > > > > future releases? And how can we get rid of this risk?
> > > > >
> > > > > As I mentioned earlier, the RM needs to access the ARM machine (it
> > > > > will be donated; the current qbt also uses one ARM machine) and
> > > > > build the tar using the keys. As it can be a shared machine, the RM
> > > > > can delete their keys once the release is approved.
> > > > > Access to the ARM machine can be sorted out as I mentioned earlier.
> > > > >
> > > > > bq. If you can list the concrete extra work that the RM needs to do
> > > > > for an ARM release, that would help us to better understand.
> > > > >
> > > > > I can write this up for future reference.
> > > > >
> > > > > On Tue, Mar 17, 2020 at 10:41 AM 俊平堵 
> > > wrote:

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-06-04 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/163/

[Jun 3, 2020 6:46:36 AM] (Ayush Saxena) HDFS-14960. TestBalancerWithNodeGroup 
should not succeed with DFSNetworkTopology. Contributed by Jim Brennan.
[Jun 3, 2020 7:17:15 AM] (Ayush Saxena) HDFS-11041. Unable to unregister 
FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu 
Chuang.
[Jun 3, 2020 9:01:37 AM] (noreply) HADOOP-17056. shelldoc fails in 
hadoop-common. (#2045)
[Jun 3, 2020 10:37:40 AM] (noreply) HADOOP-14566. Add seek support for SFTP 
FileSystem. (#1999)
[Jun 3, 2020 4:07:00 PM] (noreply) HADOOP-16568. S3A 
FullCredentialsTokenBinding fails if local credentials are unset. (#1441)
[Jun 3, 2020 9:04:26 PM] (noreply) HADOOP-17062. Fix shelldocs path in 
Jenkinsfile (#2049)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

findbugs :

   module:hadoop-yarn-project/hadoop-yarn
   Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
   Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
   Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
   Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
   Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
   Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

findbugs :

   module:hadoop-yarn-project
   Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
   Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReade