DFSClient should handle the case where all nodes in a pipeline have failed.
---------------------------------------------------------------------------

                 Key: HDFS-951
                 URL: https://issues.apache.org/jira/browse/HDFS-951
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: He Yongqiang


processDatanodeError -> setupPipelineForAppendOrRecovery sets 
streamerClosed to true and simply returns when every node in the pipeline 
has failed.
Back in DataStreamer.run(), the check
 if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
                continue;
  }
then just lets closeInternal() set closed = true.
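A minimal sketch of the loop shape described above (hypothetical class and member names, not the actual HDFS code) shows how the failure gets swallowed: once streamerClosed is set, the loop only flips closed and exits, dropping whatever is still queued without telling the writer side:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical simplification of DataStreamer.run(); illustrative only.
class MiniStreamer {
    volatile boolean streamerClosed = false;
    volatile boolean closed = false;
    final Queue<byte[]> dataQueue = new ArrayDeque<>();

    // Stand-in for setupPipelineForAppendOrRecovery() when every
    // datanode in the pipeline has failed.
    void setupPipelineForAppendOrRecovery() {
        streamerClosed = true;  // give up: no nodes left to write to
        // ...and just return -- no cleanup on the output-stream side
    }

    void run() {
        while (!closed) {
            if (streamerClosed || dataQueue.isEmpty()) {
                closeInternal();  // pending packets are silently dropped
                continue;
            }
            // normal path: send the next packet from dataQueue ...
            dataQueue.poll();
        }
    }

    void closeInternal() {
        closed = true;  // the writer is never told the file is broken
    }
}
```

Running this with a pending packet and a fully failed pipeline ends with closed == true while the packet is still sitting in dataQueue, mirroring the silent shutdown described above.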

So the DFSOutputStream never gets a chance to clean up: subsequent write() 
or close() calls throw an exception or return null, leaving the file being 
written in an incomplete state.
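One way the client could handle this, sketched here with hypothetical names (this is not a patch, just an illustration of the behavior the title asks for): record the pipeline failure and rethrow it from write()/close(), so the caller gets a clear IOException instead of a dead stream.

```java
import java.io.IOException;

// Hypothetical sketch: surface the pipeline failure to the writer
// instead of silently marking the streamer closed.
class MiniOutputStream {
    private volatile IOException lastError;
    private volatile boolean closed;

    // Called when every datanode in the pipeline has failed.
    void onAllNodesFailed() {
        lastError = new IOException("All datanodes in the pipeline failed");
        closed = true;
    }

    void write(byte[] b) throws IOException {
        if (lastError != null) {
            throw lastError;  // fail fast with the real cause
        }
        // ... enqueue the packet for the streamer ...
    }

    public void close() throws IOException {
        if (lastError != null) {
            throw lastError;  // caller learns the file is incomplete
        }
        closed = true;
    }
}
```

With this shape, a write() after total pipeline failure raises the recorded IOException rather than leaving the file silently stuck in an incomplete state.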

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.