[jira] [Created] (HDFS-16799) DataNode space usage is inconsistent, and Balancer cannot work, resulting in a very unbalanced space

2022-10-09 Thread ruiliang (Jira)
ruiliang created HDFS-16799:
---

 Summary: DataNode space usage is inconsistent, and Balancer cannot 
work, resulting in a very unbalanced space
 Key: HDFS-16799
 URL: https://issues.apache.org/jira/browse/HDFS-16799
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0
Reporter: ruiliang


 
{code:java}
echo 'A DFS Used 99.8% to ip' > sorucehost  
hdfs --debug  balancer  -fs hdfs://xxcluster06  -threshold 10 -source -f 
sorucehost  

22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.243:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.247:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-15-10/10.12.65.214:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-02-08/10.12.14.8:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-13/10.12.15.154:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-04/10.12.65.218:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.143:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-05/10.12.12.200:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.217:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.142:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.246:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.219:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.147:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-15-10/10.12.65.186:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-13/10.12.15.153:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-03-07/10.12.19.23:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-04-14/10.12.65.119:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.131:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-04/10.12.12.210:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-11/10.12.14.168:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.245:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-03-02/10.12.17.26:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.241:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-13/10.12.15.152:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.249:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-07-14/10.12.64.71:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-03-03/10.12.17.35:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.195:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.242:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.248:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.240:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-15-12/10.12.65.196:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-13/10.12.15.150:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.222:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.145:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-01-08/10.12.65.244:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-03-07/10.12.19.22:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.221:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.136:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.129:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-15/10.12.15.163:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-07-14/10.12.64.72:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-13/10.12.15.149:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.130:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-12-03/10.12.65.220:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-03-01/10.12.17.27:1019
22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: 
/4F08-05-15/10.12.15.162:1019
{code}

[jira] [Created] (HDFS-16800) Upgrade Huawei OBS client to 3.22.3.1

2022-10-09 Thread Cheng Pan (Jira)
Cheng Pan created HDFS-16800:


 Summary: Upgrade Huawei OBS client to 3.22.3.1
 Key: HDFS-16800
 URL: https://issues.apache.org/jira/browse/HDFS-16800
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Cheng Pan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-10-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapred.gridmix.TestRecordFactory 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-mvnsite-root.txt
  [568K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [72K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/809/

[jira] [Resolved] (HDFS-16798) SerialNumberMap should decrease current counter if the item exists

2022-10-09 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-16798.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> SerialNumberMap should decrease current counter if the item exists
> -
>
> Key: HDFS-16798
> URL: https://issues.apache.org/jira/browse/HDFS-16798
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> While looking into some code related to XATTR, I found a bug in 
> SerialNumberMap, as below:
> {code:java}
> public int get(T t) {
>   if (t == null) {
> return 0;
>   }
>   Integer sn = t2i.get(t);
>   if (sn == null) {
> sn = current.getAndIncrement();
> if (sn > max) {
>   current.getAndDecrement();
>   throw new IllegalStateException(name + ": serial number map is full");
> }
> Integer old = t2i.putIfAbsent(t, sn);
> if (old != null) {
>   // here: if the old is not null, we should decrease the current value.
>   return old;
> }
> i2t.put(sn, t);
>   }
>   return sn;
> } {code}
> This bug only causes the capacity of SerialNumberMap to be less than 
> expected; it has no other impact.
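The fix the report implies is a one-line rollback: when `putIfAbsent` returns an existing mapping, the serial number just taken from `current` is never used, so the counter should be given back. Below is a minimal, self-contained sketch of that corrected `get()`. This is not the actual `org.apache.hadoop.hdfs.server.namenode.SerialNumberManager` code; the class name `SerialNumberMapSketch`, the `max` value, and the `allocated()` helper are illustrative only.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of SerialNumberMap#get with the proposed fix: if putIfAbsent
// loses the race (old != null), roll current back so no slot is leaked.
class SerialNumberMapSketch<T> {
  private final AtomicInteger current = new AtomicInteger(1);
  private final int max = 1 << 24; // illustrative limit, not Hadoop's
  private final ConcurrentHashMap<T, Integer> t2i = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<Integer, T> i2t = new ConcurrentHashMap<>();

  public int get(T t) {
    if (t == null) {
      return 0;
    }
    Integer sn = t2i.get(t);
    if (sn == null) {
      sn = current.getAndIncrement();
      if (sn > max) {
        current.getAndDecrement();
        throw new IllegalStateException("serial number map is full");
      }
      Integer old = t2i.putIfAbsent(t, sn);
      if (old != null) {
        // Fix: another thread already registered t; return the unused
        // serial number so capacity is not silently lost.
        current.getAndDecrement();
        return old;
      }
      i2t.put(sn, t);
    }
    return sn;
  }

  // Illustrative helper: how many serial numbers are currently consumed.
  int allocated() {
    return current.get() - 1;
  }
}
```

Without the rollback, every lost race burns one serial number forever, which is exactly the "capacity less than expected" effect described above.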






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-10-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/

No changes




-1 overall


The following subsystems voted -1:
blanks pathlen xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-compile-javac-root.txt
 [528K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/blanks-eol.txt
 [14M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1008/artifact/out/results-javadoc-javadoc-root.txt
 [400K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


[jira] [Created] (HDFS-16801) TestObserverNode failing intermittently

2022-10-09 Thread Ashutosh Gupta (Jira)
Ashutosh Gupta created HDFS-16801:
-

 Summary: TestObserverNode failing intermittently
 Key: HDFS-16801
 URL: https://issues.apache.org/jira/browse/HDFS-16801
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Reporter: Ashutosh Gupta


TestObserverNode failing intermittently

 
{code:java}
[ERROR] 
testMkdirsRaceWithObserverRead(org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode)
  Time elapsed: 311.199 s  <<< ERROR!
java.net.ConnectException: Call From 75e69096caae/172.17.0.2 to localhost:12852 
failed on connection exception: java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor118.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:948)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:863)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1649)
at org.apache.hadoop.ipc.Client.call(Client.java:1590)
at org.apache.hadoop.ipc.Client.call(Client.java:1487)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
at com.sun.proxy.$Proxy31.mkdirs(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:677)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler.invoke(ObserverReadProxyProvider.java:518)
at com.sun.proxy.$Proxy32.mkdirs(Unknown Source)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
at com.sun.proxy.$Proxy32.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2558)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2534)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1484)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1481)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1498)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdir(DistributedFileSystem.java:1457)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode.testMkdirsRaceWithObserverRead(TestObserverNode.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.juni
{code}

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-10-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/378/

[Oct 7, 2022, 2:44:01 PM] (noreply) HADOOP-18468: Upgrade jettison to 1.5.1 to 
fix CVE-2022-40149 (#4937)




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
  doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState)) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState))