Thanks everyone for the replies. Seems like there is no easy way to handle 
this. It's very surprising that no one seems to have solved such a common use 
case.
-- Drew
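
[Editor's note: the fail-closed pattern Bryce describes further down in this thread can be sketched as below. `ToyLockManager` and `register_email` are illustrative stand-ins invented for this sketch, not a real ZooKeeper or Hazelcast API.]

```python
class LockUnavailable(Exception):
    """Raised when the lock manager is down or the lock is already held."""

class ToyLockManager:
    """Stand-in for a real lock service; tracks held locks in memory."""
    def __init__(self):
        self.up = True        # flip to False to simulate an outage
        self._held = set()

    def acquire(self, name):
        if not self.up:
            # Manager unreachable: fail closed, never assume we hold the lock.
            raise LockUnavailable("lock manager unreachable")
        if name in self._held:
            raise LockUnavailable("lock already held")
        self._held.add(name)

    def release(self, name):
        self._held.discard(name)

def register_email(manager, registered, email):
    """Register an email only while holding its lock. If locking fails,
    return False (unavailable) rather than risk a duplicate registration
    (inconsistent)."""
    try:
        manager.acquire(email)
    except LockUnavailable:
        return False
    try:
        if email in registered:
            return False
        registered.add(email)
        return True
    finally:
        manager.release(email)
```

When the manager is down, registrations are refused outright: the rare-outage scenario costs availability for those writes, not correctness.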

On Jan 6, 2012, at 2:11 PM, Bryce Allen wrote:

> That's a good question, and I'm not sure - I'm fairly new to both ZK
> and Cassandra. I found this wiki page:
> http://wiki.apache.org/hadoop/ZooKeeper/FailureScenarios
> and I think the lock recipe still works even if a stale read happens,
> assuming that wiki page is correct.
> 
> There is still some subtlety to locking with ZK, though - see the
> "Locks based on ephemeral nodes" thread from the zk mailing list in
> October:
> http://mail-archives.apache.org/mod_mbox/zookeeper-user/201110.mbox/thread?0
> 
> -Bryce
> 
> On Fri, 6 Jan 2012 13:36:52 -0800
> Drew Kutcharian <d...@venarc.com> wrote:
>> Bryce, 
>> 
>> I'm not sure about ZooKeeper, but I know that if there's a partition
>> between Hazelcast nodes, then the nodes can acquire the same lock
>> independently on each side of the partition. How does ZooKeeper
>> handle this situation?
>> 
>> -- Drew
>> 
>> 
>> On Jan 6, 2012, at 12:48 PM, Bryce Allen wrote:
>> 
>>> On Fri, 6 Jan 2012 10:03:38 -0800
>>> Drew Kutcharian <d...@venarc.com> wrote:
>>>> I know that this can be done using a lock manager such as ZooKeeper
>>>> or Hazelcast, but the issue with using either of them is that if
>>>> ZooKeeper or Hazelcast is down, then you can't be sure about the
>>>> reliability of the lock. So in the rare instance where the lock
>>>> manager is down and two users are registering with the same email,
>>>> this could cause major issues.
>>> 
>>> For most applications, if the lock manager is down, you don't
>>> acquire the lock, so you don't enter the critical section. Rather
>>> than allowing inconsistency, you become unavailable (at least for
>>> writes that require the lock).
>>> 
>>> -Bryce
>> 

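[Editor's note: a rough illustration of how a quorum answers Drew's partition question. A ZooKeeper-style ensemble serves requests, including lock grants, only from a partition holding a strict majority of the servers, so two sides of a split can never both grant the same lock. The helper names below are made up for this sketch and simplify away everything but the majority rule.]

```python
def can_grant_locks(partition_size, ensemble_size):
    """A partition can serve writes (including lock grants) only if it
    contains a strict majority of the ensemble."""
    return partition_size > ensemble_size // 2

def split(ensemble_size, left_size):
    """Partition an ensemble into two groups and report which side, if
    any, can still grant locks. At most one side can hold a majority."""
    left = can_grant_locks(left_size, ensemble_size)
    right = can_grant_locks(ensemble_size - left_size, ensemble_size)
    return (left, right)
```

For a 5-node ensemble split 2/3, only the 3-node side can grant locks; split 2/2 on a 4-node ensemble, neither side can, so the service becomes unavailable rather than handing out duplicate locks.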