Harsh J created HDFS-4257:
-----------------------------

             Summary: The ReplaceDatanodeOnFailure policies could have a forgiving option
                 Key: HDFS-4257
                 URL: https://issues.apache.org/jira/browse/HDFS-4257
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs-client
    Affects Versions: 2.0.2-alpha
            Reporter: Harsh J
            Priority: Minor


A similar question has previously come up on HDFS-3091 and related issues, but the 
essential problem is: "Why can't I write to my cluster of 3 nodes when only 1 node 
is available at a given point in time?"

The policies offer 4 options, with {{Default}} being the default:

{{Disable}} -> Disables datanode replacement entirely by throwing an error.
{{Never}} -> Never replaces a DN on pipeline failure (not desirable in many cases).
{{Default}} -> Replaces based on a few conditions, but it never tolerates a pipeline 
of just 1 DN: if only one DN remains and no replacement can be added, the write 
fails (see the sketch after this list).
{{Always}} -> Always replaces; the write fails if it can't.
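
For context, the policy is selected on the client via 
{{dfs.client.block.write.replace-datanode-on-failure.policy}}, with 
{{dfs.client.block.write.replace-datanode-on-failure.enable}} acting as the Disable 
switch. The following is only a paraphrase in Java of how the {{Default}} condition 
is described in hdfs-default.xml, not the actual DFSClient code: with a replication 
factor r and n datanodes left in the pipeline, a replacement is attempted only when 
r >= 3 and either floor(r/2) >= n, or r > n and the block was hflushed/appended.

{code:java}
// Paraphrase of the Default policy condition as documented in hdfs-default.xml;
// this is not the real DFSClient implementation.
static boolean defaultPolicyWantsReplacement(int r, int n, boolean hflushedOrAppended) {
  if (r < 3) {
    return false;                        // small pipelines are left alone
  }
  return (r / 2 >= n)                    // pipeline has shrunk to half or less
      || (r > n && hflushedOrAppended);  // already-visible data lost a replica
}
{code}

So for r = 3 the replacement kicks in once the pipeline is down to a single DN, and 
if no replacement can be found the write fails, which is exactly the case this issue 
is about.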

Would it not make sense to have an option similar to Always/Default that, despite 
_trying_ to replace, does not fail the write when it isn't possible to keep > 1 DN 
in the pipeline? I believe that was the former write behavior, and it fit with the 
minimum allowed replication factor.
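
Purely as a hypothetical illustration of the proposed behaviour (names and structure 
below are made up for the sketch, not taken from DFSClient/DataStreamer): a forgiving 
variant could still attempt the replacement the policy asks for, but continue with 
the surviving datanode(s) when the attempt fails instead of aborting the write.

{code:java}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical, self-contained sketch only; nothing here is actual DFSClient code.
class ForgivingPipelineSketch {
  private static final Log LOG = LogFactory.getLog(ForgivingPipelineSketch.class);

  // Placeholder for the real "ask the NameNode for a new DN and copy data over" step.
  private void addReplacementDatanode() throws IOException {
    throw new IOException("no spare datanode available");
  }

  void handleDatanodeFailure(boolean policyWantsReplacement, boolean forgiving)
      throws IOException {
    if (!policyWantsReplacement) {
      return;            // Never/Disable-style: keep writing to the shrunken pipeline
    }
    try {
      addReplacementDatanode();
    } catch (IOException e) {
      if (forgiving) {
        // proposed behaviour: warn and continue with the surviving DN(s)
        LOG.warn("Could not replace failed datanode; continuing with a smaller pipeline", e);
      } else {
        throw e;         // current behaviour: the whole write fails
      }
    }
  }
}
{code}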

Why is it grossly wrong to let a client's write proceed for a block with just 1 
remaining replica in the pipeline (that minimum of 1 grows with the replication 
factor the write demands), when re-replication is taken care of immediately 
afterwards? How often have we actually seen missing blocks arise from allowing this 
combined with a large rack failure or the like?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
