I found the cause of the error: it was a corrupt block.
However, the corrupt block has the same length and GS (generation stamp) as a
normal block.
Therefore, HDFS cannot recognize that block as a corrupt block.
I excluded the datanode that had the corrupt block.
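For reference, the datanode hosting the bad replica can be identified from the
block offset in the error message by asking HDFS which block covers that offset
and where its replicas live. Below is a minimal sketch using the standard
FileSystem#getFileBlockLocations API; the path and offset arguments are
placeholders for your HFile and the offset reported in the error:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocateSuspectReplicas {
  public static void main(String[] args) throws Exception {
    // args[0]: full HDFS path of the HFile (placeholder)
    // args[1]: byte offset from the error message, e.g. 117193180315
    Path hfile = new Path(args[0]);
    long badOffset = Long.parseLong(args[1]);

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(args[0]), conf);
    FileStatus status = fs.getFileStatus(hfile);

    // Walk the HDFS blocks of the file, find the one covering the bad offset,
    // and print the datanodes that hold replicas of it.
    for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
      long start = loc.getOffset();
      long end = start + loc.getLength();
      if (badOffset >= start && badOffset < end) {
        System.out.println("HDFS block covering offset " + badOffset
            + " spans [" + start + ", " + end + ") and has replicas on:");
        for (String host : loc.getHosts()) {
          System.out.println("  " + host);
        }
      }
    }
    fs.close();
  }
}

Once the host is known, it can be excluded or decommissioned as described above.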
Thanks.
On Mon, Jun 28, 2021 at 2:02 PM, Minwo
Hello,
I ran into a strange issue, and I don't understand why it occurred.
The error is: "On-disk size without header provided is 65347, but block header
contains -620432417. Block offset: 117193180315, data starts with:".
Call stack:
at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeW
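For anyone hitting the same message: it is raised by a sanity check that
compares the on-disk block size the reader expects with the size recorded in
the block header, so a garbage value such as -620432417 means the header bytes
themselves are damaged. A rough sketch of that kind of check follows; it is
illustrative only, not the actual HFileBlock code, and the header field offset
is made up:

import java.io.IOException;
import java.nio.ByteBuffer;

final class BlockSizeCheck {
  // Hypothetical offset of the 4-byte "on-disk size without header" field.
  private static final int ON_DISK_SIZE_OFFSET = 8;

  // Compares the size the reader expects with the size stored in the header
  // and fails when they disagree, which is what a corrupt header triggers.
  static void validateOnDiskSizeWithoutHeader(int expected, ByteBuffer header,
      long blockOffset) throws IOException {
    int fromHeader = header.getInt(ON_DISK_SIZE_OFFSET);
    if (fromHeader != expected) {
      throw new IOException("On-disk size without header provided is " + expected
          + ", but block header contains " + fromHeader
          + ". Block offset: " + blockOffset);
    }
  }
}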