On August 13, 2013 10:20:48 PM Brady Wetherington wrote:
> One thing that I *think* I've figured out is that the number of "how many
> replicas can you lose and stay up" is actually n-w for writes, and n-r for
> reads -
> 
> So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
> means that I still *have* my data ("durability") but I might lose _access_
> to it ("availability") for a little bit. And with that weird feature that
> Riak has (the feature's name escapes me for now?) I might even be able to
> write new data if my cluster figures out that the downed nodes are actually
> down; I think it just stores the writes on the remaining boxen, and
> eventually it gets distributed back once the nodes come back. Neat stuff.
> 
Actually, that is sort of true.  If you lose two nodes, the first read you request 
will fail: it can only perform the read against one node, and the two fallback 
vnodes won't have the data yet.  However, the cluster will recognize that data is 
missing from the fallback vnodes and initiate read repair, so the next read will 
in fact work just fine.  If you build your app to assume reads may transiently 
fail, then you shouldn't have an issue.
Writes will also continue to work in the same way you described: the fallback 
vnodes accept them, and hinted handoff moves the data back to the primaries once 
they return.
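
To make the sequence concrete, here is a rough toy model in Python of the 
scenario above (n=3, r=2, two primaries lost to an AZ failure).  It is not Riak 
client code; the Replica class and read() function are made up for illustration, 
and real read repair runs asynchronously after the coordinating vnode responds, 
so the details differ.

    # Toy model of a quorum read with two failed primaries (not Riak code).
    class Replica:
        """One vnode; primaries start with the data, fallbacks start empty."""
        def __init__(self, name, has_data=False):
            self.name = name
            self.up = True
            self.value = "my-data" if has_data else None

    def read(replicas, r=2):
        """Succeed only if at least r live replicas hold the value.
        On a failed read, copy the value onto empty live replicas (read repair)."""
        live = [rep for rep in replicas if rep.up]
        holders = [rep for rep in live if rep.value is not None]
        if len(holders) >= r:
            return holders[0].value, False
        if holders:
            for rep in live:
                rep.value = holders[0].value
        return None, bool(holders)

    # Three primaries hold the data; two go down and are replaced by empty fallbacks.
    primaries = [Replica("p1", True), Replica("p2", True), Replica("p3", True)]
    primaries[1].up = primaries[2].up = False
    ring = [primaries[0], Replica("f2"), Replica("f3")]

    print(read(ring))   # (None, True)       first read misses the r=2 quorum, triggers repair
    print(read(ring))   # ('my-data', False) second read succeeds, fallbacks now have the data

The first read comes up short of the r=2 quorum and kicks off the repair; the 
second read finds the value on the fallbacks and succeeds, which is the transient 
failure window your app needs to tolerate.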
-- 
Matthew
