Please correct me if I'm wrong, but I think the right answer is to do a GET
first so you have a vector clock that is causally after the delete.  Then your
new write should win any sibling resolution.
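
Roughly, against Riak's HTTP interface, something like the sketch below (the
host, port, bucket, and key are just placeholders, and whether a vclock comes
back on the 404 for a deleted key depends on the tombstone still being around):

    # Sketch only: host/port/bucket/key are placeholders, not from this thread.
    import requests

    RIAK = "http://127.0.0.1:8098"
    url = RIAK + "/buckets/test_bucket/keys/some_key"

    # 1. GET first, even if we expect a 404, so we pick up the current
    #    vector clock (X-Riak-Vclock header, when Riak returns one).
    resp = requests.get(url)
    vclock = resp.headers.get("X-Riak-Vclock")

    # 2. PUT the new value with that vclock so the write is causally
    #    after the delete and should win sibling resolution.
    headers = {"Content-Type": "application/json"}
    if vclock:
        headers["X-Riak-Vclock"] = vclock
    requests.put(url, data='{"value": 1}', headers=headers).raise_for_status()

If the GET gives you no vclock at all, the PUT still goes through, it just
carries no causal history, so it can end up as a sibling of the tombstone
rather than superseding it.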

Gabe



On Mon, Jul 15, 2013 at 8:59 PM, Matthew Dawson <matt...@mjdsystems.ca> wrote:

> On July 15, 2013 02:16:47 PM Gabriel Littman wrote:
> > Hi,
> >
> > Another possibility is to create a new bucket for each test run.
> >
> > Gabe
> >
> Also a good idea, but not easily implemented against my current library.  I've
> got a solution using a random addition to keys that works for now.
>
> My only problem is that this doesn't solve my underlying issue of keys
> disappearing.  Is the only way to ensure keys don't disappear to never
> reuse a key?  I'd understand if this were caused by two random processes
> interacting, but in this case there is a significant (>1s) gap between two
> runs that use overlapping keys.
>
> Thanks for all the help,
> --
> Matthew
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
