Hi,

I'm testing HDFS (0.20.203) on my cluster.

In particular, I'm running write tests while a datanode is down but not yet marked dead by the namenode. If I write a file with a client on the same machine where the datanode process was killed, the write fails and these logs are printed:

...INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused... (once for every retry)

I noticed that the client insists on writing to the local datanode, exhausting all retries (dfs.client.write.block.retries is set to 100), until the write aborts! Is this the correct behavior? After N failed retries, shouldn't it contact another datanode to avoid aborting the write?
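
For reference, this is roughly how I reproduce it (a minimal sketch, assuming the stock 0.20 FileSystem API; the config key is the one mentioned above, and the path /tmp/writetest is just an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Retry setting mentioned above; assumed to be read client-side
        // when the output stream is opened.
        conf.setInt("dfs.client.write.block.retries", 100);

        // fs.default.name in the loaded config must point at the namenode.
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/tmp/writetest"));

        byte[] buf = new byte[64 * 1024];
        for (int i = 0; i < 1024; i++) {
            // With the local datanode killed, createBlockOutputStream
            // fails here and the ConnectException retries start.
            out.write(buf);
        }
        out.close();
        fs.close();
    }
}

Running this on the node with the killed datanode is what produces the repeated ConnectException above.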

Thanks

Gianni
