Re: Collect feedback for HDFS-15638

2020-10-18 Thread Stephen O'Donnell
I agree with Owen on this - I don't think this is a feature we should add
to HDFS.

If managing the permissions for Hive tables is becoming a big overhead for
you, you should look into something like Sentry. It allows you to manage
the permissions of all the files and folders under Hive tables in a
centralized place. It is also more memory efficient inside the namenode, as
it does not store an ACL object against each file. Sentry also allows for
more than 32 ACL entries, which is the normal HDFS per-file limit. At Cloudera, we see a
lot of clusters using Sentry to manage Hive table permissions.

Sentry simply uses the existing HDFS Attribute Provider interface, so in
theory it would be fairly simple to create a plugin of your own to do just
what you need; but as Sentry exists and is fairly well proven in Hive
environments already, it would be simpler to just use it.
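The Attribute Provider idea can be sketched with a toy model (illustrative Python only - the real interface is HDFS's Java INodeAttributeProvider, and every path, group, and policy below is made up): instead of storing an ACL object on every file, the provider answers permission queries from a central table keyed by table or partition prefix.

```python
# Toy model of an attribute-provider-style plugin: permissions are
# resolved from a central prefix-keyed policy table rather than from an
# ACL object stored on each file. Paths and groups are hypothetical.

POLICIES = {
    "/warehouse/db1/table_a": {"owner": "hive", "group": "etl", "mode": 0o770},
    "/warehouse/db1": {"owner": "hive", "group": "analysts", "mode": 0o750},
}

def effective_attrs(path, default=None):
    """Walk up from `path` to the nearest ancestor that has a policy."""
    parts = path.rstrip("/").split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/".join(parts[:i]) or "/"
        if prefix in POLICIES:
            return POLICIES[prefix]
    return default

# Every file under table_a resolves to the single table-level policy,
# so no per-file setfacl() calls (and no per-file NN state) are needed.
print(effective_attrs("/warehouse/db1/table_a/part=1/file.parquet"))
```

This is also why such a scheme is memory-efficient in the namenode: one policy entry covers an arbitrary number of files.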


On Sat, Oct 17, 2020 at 3:23 PM Xinli shang  wrote:

> Hi Vinayakumar,
>
> The staging tables are dynamic. From the Hadoop security team's
> perspective, it is unrealistic to force every data writer to do that,
> because there are so many of them and they write in different ways.
>
> Rename is just one scenario, and there are others. For example, when a
> permission is changed, today we need to apply that change to every file.
> If we had that flag, we would only change the table or partition
> directories.
>
> Xinli
>
>
> On Sat, Oct 17, 2020 at 12:14 AM Vinayakumar B 
> wrote:
>
> > IIUC, Hive renames are from Hive's staging directory during write to
> > the final destination within the table.
> >
> > Why not set the default ACLs of the staging directory to whatever is
> > expected, and then continue writing the remaining files?
> >
> > In this way even after rename you will have expected ACLs on the final
> > files.
> >
> > Setting default ACLs on the staging directory can be done with a single RPC.
> >
> > -Vinay
> >
> > On Sat, 17 Oct 2020 at 8:08 AM, Xinli shang 
> > wrote:
> >
> > > Thanks Owen for your reply! As mentioned in the Jira, default ACLs
> > > don't apply to rename. Any idea how rename can work without setting
> > > ACLs per file?
> > >
> > > On Fri, Oct 16, 2020 at 7:25 PM Owen O'Malley 
> > > wrote:
> > >
> > > > I'm very -1 on adding these semantics.
> > > >
> > > > When you create the table's directory, set the default ACL. That
> > > > will have exactly the effect that you are looking for without
> > > > creating additional semantics.
> > > >
> > > > .. Owen
> > > >
> > > > On Fri, Oct 16, 2020 at 7:02 PM Xinli shang  >
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I opened https://issues.apache.org/jira/browse/HDFS-15638 and want
> > > > > to collect feedback from the community. I know changing a
> > > > > permission model that follows the POSIX model is never a trivial
> > > > > change, so please comment if you have concerns. For reading
> > > > > convenience, here is a copy of the ticket.
> > > > >
> > > > > *Problem*: Currently, when a user tries to access a file, he/she
> > > > > needs the permissions of its parent and ancestors as well as the
> > > > > permission of the file itself. This is correct in general, but for
> > > > > Hive table directories/files, all the files under a partition or
> > > > > even a table usually have the same permissions for the same set of
> > > > > ACL groups. Although the permissions and ACL groups are the same,
> > > > > the writer still needs to call setfacl() for every file to add
> > > > > LDAP groups. This results in a huge number of RPC calls to the NN.
> > > > > HDFS has default ACLs to solve that, but they only apply to create
> > > > > and copy, not to rename. However, in Hive ETL, rename is very
> > > > > common.
> > > > >
> > > > > *Proposal*: Add a 1-bit flag to directory inodes to indicate
> > > > > whether or not it is a Hive table directory. If that flag is set,
> > > > > then all the sub-directories and files under it will just use its
> > > > > permission and ACL group settings. This way, Hive ETL doesn't need
> > > > > to set permissions at the file level. If that flag is not set (the
> > > > > default), everything works as before. Setting/unsetting the flag
> > > > > would require admin privilege.
> > > > >
> > > > > --
> > > > > Xinli Shang
> > > > >
> > > >
> > >
> > >
> > > --
> > > Xinli Shang
> > >
> > --
> > -Vinay
> >
>
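To make the create-vs-rename distinction in the thread above concrete, here is a toy model (illustrative Python only, not HDFS code; all directory names and ACL strings are invented): a default ACL is stamped onto a file when it is created in a directory, but a rename just moves the inode, so a file written in staging and renamed into the table keeps whatever ACL staging gave it.

```python
# Toy model of why default ACLs don't help Hive's rename-based commits:
# the default ACL is copied to a file at create time, while rename only
# moves the inode and leaves its ACL untouched.

class Dir:
    def __init__(self, default_acl=None):
        self.default_acl = default_acl
        self.files = {}  # file name -> ACL actually stored on the file

def create(d, name):
    d.files[name] = d.default_acl          # default ACL applies at create

def rename(src, name, dst):
    dst.files[name] = src.files.pop(name)  # inode moves, ACL unchanged

staging = Dir(default_acl=None)             # staging dir: no default ACL
table = Dir(default_acl="group:etl:rwx")    # table dir: default ACL set

create(staging, "part-0000")
rename(staging, "part-0000", table)
print(table.files["part-0000"])  # None - table's default ACL never applied
```

Vinay's suggestion amounts to setting the default ACL on the staging directory itself (one RPC), so files are born with the right ACL and the rename no longer matters.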


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-10-18 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-compile-javac-root.txt
  [436K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [388K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [120K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/90/artifact/out/patch-un

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-10-18 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/

[Oct 17, 2020 6:31:18 AM] (noreply) HADOOP-17288. Use shaded guava from 
thirdparty. (#2342). Contributed by Ayush Saxena.




-1 overall


The following subsystems voted -1:
compile findbugs golang mvninstall mvnsite pathlen shadedclient unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemMocked 
   hadoop.fs.azure.TestBlobMetadata 
   hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
   hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
   hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
   hadoop.fs.azure.TestWasbFsck 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   mvninstall:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-mvninstall-root.txt
  [0]

   compile:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-compile-root.txt
  [0]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-compile-root.txt
  [0]

   golang:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-compile-root.txt
  [0]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-compile-root.txt
  [0]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/buildtool-patch-checkstyle-root.txt
  [0]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/patch-mvnsite-root.txt
  [0]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   findbugs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/branch-findbugs-root.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/branch-findbugs-hadoop-assemblies.txt
  [0]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/branch-findbugs-hadoop-cloud-storage-project.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/artifact/out/branch-findbugs-hadoop-hdfs-project.txt
  [96K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/a

Re: Collect feedback for HDFS-15638

2020-10-18 Thread Xinli shang
We are using Apache Sentry. At large HDFS scale, which is our case, we see
performance degradation when enabling the Sentry plugin in the NameNode.
So we have to disable the plugin in the NN and map Sentry policies to HDFS
ACLs instead. That has worked well so far; this is the only major issue we
have seen.

On Sun, Oct 18, 2020 at 1:19 AM Stephen O'Donnell
 wrote:

> I agree with Owen on this - I don't think this is a feature we should add
> to HDFS.
>
> [rest of quoted thread snipped - it repeats the messages above]

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2020-10-18 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/32/

[Oct 17, 2020 6:31:18 AM] (noreply) HADOOP-17288. Use shaded guava from 
thirdparty. (#2342). Contributed by Ayush Saxena.




-1 overall


The following subsystems voted -1:
blanks findbugs mvnsite pathlen shadedclient unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 

findbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 333] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localize

[jira] [Created] (HDFS-15639) [JDK 11] Fix Javadoc errors in hadoop-hdfs-client

2020-10-18 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HDFS-15639:
---

 Summary: [JDK 11] Fix Javadoc errors in hadoop-hdfs-client
 Key: HDFS-15639
 URL: https://issues.apache.org/jira/browse/HDFS-15639
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org