On 3/17/2015 10:53 AM, xiaohe lan wrote:
Hi Charles,

Thanks for pointing me to the doc, it really helps me a lot.

I am confused by another problem while reading DFSOutputStream.java. When the packets of a block are being sent through the pipeline, why does DataStreamer wait until all acks for the earlier packets have been received before it sends the last packet? I see that in DataStreamer's run(), it waits for ackQueue.size() == 0, then adds the last packet to ackQueue, then waits for ackQueue.size() == 0 again, and finally closes the responseProcessor and the blockStream.

Xiaohe,

I assume you are referring to this code:

          if (one.isLastPacketInBlock()) {
            // wait until all data packets have been successfully acked
            synchronized (dataQueue) {
              while (!streamerClosed && !hasError &&
                  ackQueue.size() != 0 && dfsClient.clientRunning) {
                try {
                  // wait for acks to arrive from datanodes
                  dataQueue.wait(1000);
                } catch (InterruptedException  e) {
                  DFSClient.LOG.warn("Caught exception ", e);
                }
              }
            }
            if (streamerClosed || hasError || !dfsClient.clientRunning) {
              continue;
            }
            stage = BlockConstructionStage.PIPELINE_CLOSE;
          }

This is just making sure that the block has been completely written before the streamer moves to the PIPELINE_CLOSE stage. There may be many packets in a block, so the last packet is only sent once every earlier packet has been acked.
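To make the hand-off concrete, here is a minimal sketch of that "drain the ack queue before closing" pattern, outside of any HDFS code. PendingAckDrain and its method names are made up for illustration; in the real client the equivalent logic is split between DataStreamer (which waits) and ResponseProcessor (which removes acked packets and notifies).

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical illustration of waiting for an ack queue to drain.
    public class PendingAckDrain {
        private final Queue<Long> ackQueue = new ArrayDeque<>();

        // Called by the sending thread after a packet goes into the pipeline.
        public synchronized void packetSent(long seqno) {
            ackQueue.add(seqno);
        }

        // Called by the response-processing thread when an ack arrives.
        public synchronized void ackReceived(long seqno) {
            ackQueue.remove(seqno);
            notifyAll();  // wake up any thread waiting for the queue to drain
        }

        // Before sending the last packet of a block (and again before closing
        // the streams), block until every outstanding packet has been acked.
        public synchronized void waitForAllAcks() throws InterruptedException {
            while (!ackQueue.isEmpty()) {
                wait(1000);  // bounded wait, re-check the condition in a loop
            }
        }
    }

The bounded wait inside a while loop mirrors the dataQueue.wait(1000) in the snippet above: the thread re-checks the emptiness condition (and, in the real code, the error and shutdown flags) every time it wakes up.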

Charles
