Lin Yiqun created HDFS-9832:
-------------------------------
Summary: Erasure Coding: Improve exception handling in
ErasureCodingWorker#ReconstructAndTransferBlock
Key: HDFS-9832
URL: https://issues.apache.org/jira/browse/HDFS-9832
Project: Hadoop HDFS
Issue Type: Sub-task
Components: erasure-coding
Reporter: Lin Yiqun
Assignee: Lin Yiqun
Priority: Minor
Fix For: 3.0.0
There are two places in {{ErasureCodingWorker#ReconstructAndTransferBlock}}
that I think could be improved.

1. In the {{run}} method, the step 3 data transfer can sometimes fail and
throw, in which case {{clearBuffers}} is skipped for that iteration and the
buffers are not completely cleared. Would it be better to also invoke
{{clearBuffers}} in a finally block?
{code}
while (positionInBlock < maxTargetLength) {
  final int toReconstruct = (int) Math.min(
      bufferSize, maxTargetLength - positionInBlock);
  // step1: read from minimum source DNs required for reconstruction.
  // The returned success list is the source DNs we do real read from
  Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap = new HashMap<>();
  try {
    success = readMinimumStripedData4Reconstruction(success,
        toReconstruct, corruptionMap);
  } finally {
    // report corrupted blocks to NN
    reportCorruptedBlocks(corruptionMap);
  }

  // step2: decode to reconstruct targets
  reconstructTargets(success, targetsStatus, toReconstruct);

  // step3: transfer data
  if (transferData2Targets(targetsStatus) == 0) {
    String error = "Transfer failed for all targets.";
    throw new IOException(error);
  }

  clearBuffers();
  positionInBlock += toReconstruct;
}
{code}
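To illustrate the failure mode: when the transfer step throws, control leaves the loop body before {{clearBuffers}} runs, so the decoded data from that iteration stays in the buffers. A minimal, self-contained sketch of the proposed fix (the names {{buf}}, {{transfer}} and {{runOnce}} are hypothetical stand-ins, not the real {{ErasureCodingWorker}} members):

```java
import java.io.IOException;
import java.nio.ByteBuffer;

public class ClearOnFailure {
    static ByteBuffer buf = ByteBuffer.allocate(8);

    // Hypothetical stand-in for transferData2Targets; always fails here
    // to simulate "Transfer failed for all targets."
    static int transfer() {
        return 0;
    }

    static boolean runOnce() {
        buf.put((byte) 1); // decoded data lands in the buffer
        try {
            if (transfer() == 0) {
                throw new IOException("Transfer failed for all targets.");
            }
            return true;
        } catch (IOException e) {
            return false;
        } finally {
            buf.clear(); // reset position/limit even when the transfer fails
        }
    }

    public static void main(String[] args) {
        runOnce();
        // Position is 0: the buffer was cleared despite the failed transfer.
        System.out.println(buf.position());
    }
}
```

With the clear in a finally block, a failed iteration can no longer leave stale decoded bytes behind for the next pass over the loop.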
2. In the finally handling code, would it be better to also set the buffer
objects, {{targetOutputStreams}} and the socket objects to null after closing
them?
{code}
} finally {
  datanode.decrementXmitsInProgress();
  // close block readers
  for (StripedReader stripedReader : stripedReaders) {
    closeBlockReader(stripedReader.blockReader);
  }
  for (int i = 0; i < targets.length; i++) {
    IOUtils.closeStream(targetOutputStreams[i]);
    IOUtils.closeStream(targetInputStreams[i]);
    IOUtils.closeStream(targetSockets[i]);
  }
}
{code}
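The idea is that closing a stream releases its underlying resource, but the array slot still holds a reference, so the object itself stays reachable as long as the worker does; nulling the slot lets the GC reclaim it earlier. A small self-contained sketch of the pattern (the array name {{targetOutputs}} and the use of {{ByteArrayOutputStream}} are illustrative assumptions, not the real DataNode stream types):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ReleaseAfterClose {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream[] targetOutputs = new ByteArrayOutputStream[2];
        for (int i = 0; i < targetOutputs.length; i++) {
            targetOutputs[i] = new ByteArrayOutputStream();
        }
        try {
            targetOutputs[0].write(42);
        } finally {
            for (int i = 0; i < targetOutputs.length; i++) {
                targetOutputs[i].close(); // release the stream
                targetOutputs[i] = null;  // drop the reference so GC can reclaim it
            }
        }
        // true: the slot no longer pins the closed stream in memory
        System.out.println(targetOutputs[0] == null);
    }
}
```

Whether this matters in practice depends on how long the worker object outlives the reconstruction; for short-lived tasks the benefit may be small.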
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)