[jira] [Created] (HDFS-16806) EC data balancer: block index error, data cannot be moved

2022-10-19 Thread ruiliang (Jira)
ruiliang created HDFS-16806:
---

 Summary: EC data balancer: block index error, data cannot be moved
 Key: HDFS-16806
 URL: https://issues.apache.org/jira/browse/HDFS-16806
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0
Reporter: ruiliang


When balancing erasure-coded data, the balancer requests the block with a wrong block index, so the data cannot be moved.

DataNode 10.12.15.149 is at 100% disk usage.
{code:java}
echo 10.12.15.149 > sourcehost
balancer -fs hdfs://xxcluster06 -threshold 10 -source -f sourcehost \
  2>>~/balancer.log &
 {code}
{code:java}
datanode logs
...
2022-10-19 14:43:02,031 ERROR datanode.DataNode (DataXceiver.java:run(321)) - 
fs-hiido-dn-12-15-149.xx.com:1019:DataXceiver error processing COPY_BLOCK 
operation  src: /10.12.65.216:58214 dst: /10.12.15.149:1019
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not 
found for 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036799576592_4218617
        at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:492)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.&lt;init&gt;(BlockSender.java:256)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.copyBlock(DataXceiver.java:1089)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opCopyBlock(Receiver.java:291)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:113)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
        at java.lang.Thread.run(Thread.java:748)
...        
hdfs fsck -fs hdfs://xxcluster06 -blockId blk_-9223372036799576592 
Connecting to namenode via 
http://fs-hiido-xxcluster06-yynn2.xx.com:50070/fsck?ugi=hdfs&blockId=blk_-9223372036799576592+&path=%2F
FSCK started by hdfs (auth:KERBEROS_SSL) from /10.12.19.4 at Wed Oct 19 14:47:15 CST 2022
Block Id: blk_-9223372036799576592
Block belongs to: 
/hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
No. of Expected Replica: 5
No. of live Replica: 5
No. of excess Replica: 0
No. of stale Replica: 5
No. of decommissioned Replica: 0
No. of decommissioning Replica: 0
No. of corrupted Replica: 0
Block replica on datanode/rack: fs-hiido-dn-12-66-4.xx.com/4F08-01-09 is HEALTHY
Block replica on datanode/rack: fs-hiido-dn-12-65-244.xx.com/4F08-01-08 is 
HEALTHY
Block replica on datanode/rack: fs-hiido-dn-12-15-149.xx.com/4F08-05-13 is 
HEALTHY
Block replica on datanode/rack: fs-hiido-dn-12-65-218.xx.com/4F08-12-04 is 
HEALTHY
Block replica on datanode/rack: fs-hiido-dn-12-17-35.xx.com/4F08-03-03 is HEALTHY

hdfs fsck -fs hdfs://xxcluster06 
/hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
 -files -blocks -locations
Connecting to namenode via 
http://xx.com:50070/fsck?ugi=hdfs&files=1&blocks=1&locations=1&path=%2Fhive_warehouse%2Fwarehouse_old_snapshots%2Fyy_mbsdkevent_original%2Fdt%3D20210505%2Fpost_202105052129_33.log.gz
FSCK started by hdfs (auth:KERBEROS_SSL) from /10.12.19.4 for path 
/hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
 at Wed Oct 19 14:48:42 CST 2022
/hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
 500582412 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  OK
0. BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036799576592_4218617 
len=500582412 Live_repl=5  
[blk_-9223372036799576592:DatanodeInfoWithStorage[10.12.17.35:1019,DS-3ccebf8d-5f05-45b5-ac7f-96d1cfb48608,DISK],
 
blk_-9223372036799576591:DatanodeInfoWithStorage[10.12.65.218:1019,DS-4f8e3114-7566-4cf1-ad5a-e454c8ea8805,DISK],
 
blk_-9223372036799576590:DatanodeInfoWithStorage[10.12.15.149:1019,DS-1dd55c27-8f47-46a6-935b-1d9024ca9188,DISK],
 
blk_-9223372036799576589:DatanodeInfoWithStorage[10.12.65.244:1019,DS-a9ffd747-c427-4aaa-8559-04cded7d9d5f,DISK],
 
blk_-9223372036799576588:DatanodeInfoWithStorage[10.12.66.4:1019,DS-d88f94db-6db1-4753-a652-780d7cd7f081,DISK]]
Status: HEALTHY
 Number of data-nodes:  62
 Number of racks:               19
 Total dirs:                    0
 Total symlinks:                0

Replicated Blocks:
 Total size:    0 B
 Total files:   0
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0

Erasure Coded Block Groups:
 Total size:    500582412 B
 Total files:   1
 Total block groups (validated):        1 (avg. block group size 500582412 B)
 Minimally erasure-coded block groups:  1 (100.0 %)
 Over-erasure-coded block groups:       0 (0.0 %)
 Under-erasure-coded block groups:     
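The internal block IDs in the fsck output above differ only in their low bits: in HDFS, a striped (EC) block group gets an ID whose low four bits are zero, and each internal block's ID is the group ID plus its index within the group (data blocks first, then parity). A minimal, dependency-free sketch of that relationship; the mask value 15 mirrors what I understand HdfsServerConstants.BLOCK_GROUP_INDEX_MASK to be, so treat it as an assumption rather than Hadoop's actual code:

```java
class EcBlockId {
    // Low 4 bits of a striped block ID encode the index within the group
    // (assumed to mirror HdfsServerConstants.BLOCK_GROUP_INDEX_MASK = 15).
    static final long INDEX_MASK = 15L;

    // Internal block ID = block group ID + index in the group.
    static long internalBlockId(long groupId, int indexInGroup) {
        return groupId + indexInGroup;
    }

    // Recover the group ID by clearing the index bits.
    static long groupId(long blockId) {
        return blockId & ~INDEX_MASK;
    }

    // Recover the index within the group.
    static int indexInGroup(long blockId) {
        return (int) (blockId & INDEX_MASK);
    }

    public static void main(String[] args) {
        long group = -9223372036799576592L; // group ID from the fsck output above
        for (int i = 0; i < 5; i++) {       // RS-3-2: 3 data + 2 parity internal blocks
            long id = internalBlockId(group, i);
            System.out.println("blk_" + id + " index=" + indexInGroup(id));
        }
    }
}
```

This reproduces the five IDs blk_-9223372036799576592 through blk_-9223372036799576588 shown in the block listing, which is why a wrong index makes the DataNode look up a replica it does not have.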

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-10-19 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-mvnsite-root.txt
  [568K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [432K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [72K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/819/art

[jira] [Created] (HDFS-16807) Improve legacy ClientProtocol#rename2() interface

2022-10-19 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16807:
---

 Summary: Improve legacy ClientProtocol#rename2() interface
 Key: HDFS-16807
 URL: https://issues.apache.org/jira/browse/HDFS-16807
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 3.3.3
Reporter: JiangHua Zhu


In HDFS-2298, rename2() replaced rename(), which was a meaningful improvement. However, some legacy usages are still preserved:
1. When the shell executes the mv command, rename() is still used:
./bin/hdfs dfs -mv [source] [target]
{code:java}
// In MoveCommands#Rename:
protected void processPath(PathData src, PathData target) throws IOException {
  ...
  if (!target.fs.rename(src.path, target.path)) {
    // we have no way to know the actual error...
    throw new PathIOException(src.toString());
  }
}
{code}

2. When NNThroughputBenchmark benchmarks rename, in NNThroughputBenchmark#RenameFileStats:
{code:java}
long executeOp(int daemonId, int inputIdx, String ignore)
    throws IOException {
  long start = Time.now();
  clientProto.rename(fileNames[daemonId][inputIdx],
      destNames[daemonId][inputIdx]);
  long end = Time.now();
  return end - start;
}
{code}

I think the interface should be kept uniform since rename() is deprecated. For 
NNThroughputBenchmark the change is easy, but improving MoveCommands is harder 
because it involves changing FileSystem.
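For context, the behavioral difference the migration has to preserve: the legacy rename() returns false on failure and loses the cause, while the rename2()-style contract throws an exception with a reason and accepts flags such as overwrite. A hypothetical, dependency-free sketch of the two contracts; the class and method shapes are illustrative, not Hadoop's actual FileSystem or ClientProtocol:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Toy namespace illustrating the two rename contracts; not Hadoop code.
class ToyNamespace {
    final Set<String> paths = new HashSet<>();

    // Legacy contract: boolean result, the actual error is lost.
    boolean rename(String src, String dst) {
        if (!paths.contains(src) || paths.contains(dst)) {
            return false; // caller cannot tell which condition failed
        }
        paths.remove(src);
        paths.add(dst);
        return true;
    }

    // rename2-style contract: throws with a reason, supports overwrite.
    void rename2(String src, String dst, boolean overwrite) throws IOException {
        if (!paths.contains(src)) {
            throw new IOException("source does not exist: " + src);
        }
        if (paths.contains(dst) && !overwrite) {
            throw new IOException("destination exists: " + dst);
        }
        paths.remove(dst);
        paths.remove(src);
        paths.add(dst);
    }
}
```

The "we have no way to know the actual error" comment in MoveCommands#Rename is exactly the boolean contract's weakness that the exception-based contract fixes.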



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16808) HDFS metrics will hold the previous value if there is no new call

2022-10-19 Thread leo sun (Jira)
leo sun created HDFS-16808:
--

 Summary: HDFS metrics will hold the previous value if there is no 
new call
 Key: HDFS-16808
 URL: https://issues.apache.org/jira/browse/HDFS-16808
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: leo sun
 Attachments: image-2022-10-19-23-59-19-673.png

According to the implementation of MutableStat.snapshot(), HDFS metrics always 
hold the previous value if there are no new calls.

As a result, even after the user switches active and standby, the previous 
active NameNode (now standby) keeps reporting the old value, as the picture shows:

!image-2022-10-19-23-59-19-673.png!
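The behavior described matches a common snapshot pattern: when no samples arrived in the interval, the metric re-emits its last computed value instead of resetting. A simplified, hypothetical model of that logic (not the actual MutableStat source; names are illustrative):

```java
// Simplified model of a MutableStat-like metric: snapshot() keeps the
// previous value when no new samples arrived in the interval.
class ToyMutableStat {
    private long intervalCount = 0;
    private double intervalSum = 0;
    private double lastAvg = 0; // retained across empty intervals

    void add(double sample) {
        intervalCount++;
        intervalSum += sample;
    }

    // Returns the interval average; with no new calls, returns the old value.
    double snapshot() {
        if (intervalCount > 0) {
            lastAvg = intervalSum / intervalCount;
            intervalCount = 0;
            intervalSum = 0;
        }
        return lastAvg; // stale value survives if no new calls arrived
    }
}
```

Under this model, a NameNode that stops serving calls after a failover keeps publishing its last non-empty interval's average indefinitely, which is the symptom in the attached picture.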






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-10-19 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/

[Oct 18, 2022, 3:28:56 AM] (noreply) HADOOP-18497. Upgrade commons-text version 
to fix CVE-2022-42889. (#5037). Contributed by PJ Fanning.
[Oct 18, 2022, 1:40:39 PM] (noreply) YARN-11247. Remove unused classes 
introduced by YARN-9615. (#4720)
[Oct 18, 2022, 1:53:02 PM] (noreply) HADOOP-18476. Abfs and S3A FileContext 
bindings to close wrapped filesystems in finalizer (#4966)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-compile-javac-root.txt
 [528K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/blanks-eol.txt
 [14M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-checkstyle-root.txt
 [14M]

   hadolint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-hadolint.txt
 [8.0K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/results-javadoc-javadoc-root.txt
 [392K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1018/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
 [32K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-10-19 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/383/

[Oct 17, 2022, 10:56:15 AM] (noreply) HDFS-6874. Add GETFILEBLOCKLOCATIONS 
operation to HttpFS (#4750)
[Oct 17, 2022, 5:10:47 PM] (noreply) HADOOP-18156. Address JavaDoc warnings in 
classes like MarkerTool, S3ObjectAttributes, etc (#4965)
[Oct 18, 2022, 3:28:56 AM] (noreply) HADOOP-18497. Upgrade commons-text version 
to fix CVE-2022-42889. (#5037). Contributed by PJ Fanning.
[Oct 18, 2022, 1:40:39 PM] (noreply) YARN-11247. Remove unused classes 
introduced by YARN-9615. (#4720)
[Oct 18, 2022, 1:53:02 PM] (noreply) HADOOP-18476. Abfs and S3A FileContext 
bindings to close wrapped filesystems in finalizer (#4966)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) 
Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) 
Redundant null check at MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) 
Redundant null check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) 
Redundant null check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) 
Redundant null check at NativePmemMappableBlockLoader.java:[line 130] 
   org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts 
doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:[line 1] 
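For reference, the shape of code that triggers this SpotBugs warning (redundant nullcheck of a value known to be non-null): a reference the analyzer has already proven non-null, e.g. because it was just dereferenced, is checked against null again. A contrived illustration, not the flagged Hadoop code:

```java
// Contrived example of the pattern SpotBugs flags as a redundant nullcheck:
// `name` is dereferenced on the first line, so if it were null we would have
// thrown already; the later null check is provably dead.
class RedundantNullcheck {
    static int describe(String name) {
        String trimmed = name.trim();   // would throw NPE here if name were null
        if (name != null) {             // redundant: name is non-null at this point
            return trimmed.length();
        }
        return -1;                      // unreachable in practice
    }
}
```

The fix is usually to either drop the dead check or move it before the first dereference, whichever matches the intended contract.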

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundan

[jira] [Resolved] (HDFS-16803) Improve some annotations in hdfs module

2022-10-19 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu resolved HDFS-16803.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Improve some annotations in hdfs module
> ---
>
> Key: HDFS-16803
> URL: https://issues.apache.org/jira/browse/HDFS-16803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.9.2, 3.3.4
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In hdfs module, some annotations are out of date. E.g:
> {code:java}
>   FSDirRenameOp: 
>   /**
>* @see {@link #unprotectedRenameTo(FSDirectory, String, String, 
> INodesInPath,
>* INodesInPath, long, BlocksMapUpdateInfo, Options.Rename...)}
>*/
>   static RenameResult renameTo(FSDirectory fsd, FSPermissionChecker pc,
>   String src, String dst, BlocksMapUpdateInfo collectedBlocks,
>   boolean logRetryCache,Options.Rename... options)
>   throws IOException {
> {code}
> We should try to improve these annotations to make the documentation look 
> more comfortable.






[jira] [Resolved] (HDFS-16806) EC data balancer: block index error, data cannot be moved

2022-10-19 Thread ruiliang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ruiliang resolved HDFS-16806.
-
Hadoop Flags: Reviewed
  Resolution: Fixed

> EC data balancer: block index error, data cannot be moved
> ---
>
> Key: HDFS-16806
> URL: https://issues.apache.org/jira/browse/HDFS-16806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: ruiliang
>Priority: Critical
> Attachments: image-2022-10-20-11-32-35-833.png
>
>
> When balancing erasure-coded data, the balancer requests the block with a wrong block index, so the data cannot be moved.
> DataNode 10.12.15.149 is at 100% disk usage.
>  
> {code:java}
> echo 10.12.15.149 > sourcehost
> balancer -fs hdfs://xxcluster06 -threshold 10 -source -f sourcehost \
>   2>>~/balancer.log &
> {code}
>  
> The DataNode logs emit a large volume of this output:
> {code:java}
> datanode logs
> ...
> 2022-10-19 14:43:02,031 ERROR datanode.DataNode (DataXceiver.java:run(321)) - 
> fs-hiido-dn-12-15-149.xx.com:1019:DataXceiver error processing COPY_BLOCK 
> operation  src: /10.12.65.216:58214 dst: /10.12.15.149:1019
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not 
> found for 
> BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036799576592_4218617
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:492)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.&lt;init&gt;(BlockSender.java:256)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.copyBlock(DataXceiver.java:1089)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opCopyBlock(Receiver.java:291)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:113)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>         at java.lang.Thread.run(Thread.java:748)
> ...    
>     
> hdfs fsck -fs hdfs://xxcluster06 -blockId blk_-9223372036799576592 
> Connecting to namenode via 
> http://fs-hiido-xxcluster06-yynn2.xx.com:50070/fsck?ugi=hdfs&blockId=blk_-9223372036799576592+&path=%2F
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.12.19.4 at Wed Oct 19 14:47:15 CST 2022
> Block Id: blk_-9223372036799576592
> Block belongs to: 
> /hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
> No. of Expected Replica: 5
> No. of live Replica: 5
> No. of excess Replica: 0
> No. of stale Replica: 5
> No. of decommissioned Replica: 0
> No. of decommissioning Replica: 0
> No. of corrupted Replica: 0
> Block replica on datanode/rack: fs-hiido-dn-12-66-4.xx.com/4F08-01-09 is 
> HEALTHY
> Block replica on datanode/rack: fs-hiido-dn-12-65-244.xx.com/4F08-01-08 is 
> HEALTHY
> Block replica on datanode/rack: fs-hiido-dn-12-15-149.xx.com/4F08-05-13 is 
> HEALTHY
> Block replica on datanode/rack: fs-hiido-dn-12-65-218.xx.com/4F08-12-04 is 
> HEALTHY
> Block replica on datanode/rack: fs-hiido-dn-12-17-35.xx.com/4F08-03-03 is 
> HEALTHY
> hdfs fsck -fs hdfs://xxcluster06 
> /hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
>  -files -blocks -locations
> Connecting to namenode via 
> http://xx.com:50070/fsck?ugi=hdfs&files=1&blocks=1&locations=1&path=%2Fhive_warehouse%2Fwarehouse_old_snapshots%2Fyy_mbsdkevent_original%2Fdt%3D20210505%2Fpost_202105052129_33.log.gz
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.12.19.4 for path 
> /hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
>  at Wed Oct 19 14:48:42 CST 2022
> /hive_warehouse/warehouse_old_snapshots/yy_mbsdkevent_original/dt=20210505/post_202105052129_33.log.gz
>  500582412 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  OK
> 0. BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036799576592_4218617 
> len=500582412 Live_repl=5  
> [blk_-9223372036799576592:DatanodeInfoWithStorage[10.12.17.35:1019,DS-3ccebf8d-5f05-45b5-ac7f-96d1cfb48608,DISK],
>  
> blk_-9223372036799576591:DatanodeInfoWithStorage[10.12.65.218:1019,DS-4f8e3114-7566-4cf1-ad5a-e454c8ea8805,DISK],
>  
> blk_-9223372036799576590:DatanodeInfoWithStorage[10.12.15.149:1019,DS-1dd55c27-8f47-46a6-935b-1d9024ca9188,DISK],
>  
> blk_-9223372036799576589:DatanodeInfoWithStorage[10.12.65.244:1019,DS-a9ffd747-c427-4aaa-8559-04cded7d9d5f,DISK],
>  
> blk_-9223372036799576588:DatanodeInfoWithStorage[10.12.66.4:1019,DS-d88f94db-6db1-4753-a652-780d7cd7f081,DISK]]
> Status: HEALTHY
>  Number of data-nodes:  62
>  Number of racks:               19
>  Total dirs:                    0
>  Total symlinks:                0
>
> Replicated Blocks:
>  Total size:    0 B
>  Total files:   0
>  Total blocks (validated):      0
>  Minimally replicated blocks:   0
>  Over-replica