Hi Dan-

It is a single client, with no other clients connected to Riak.

It is really the deletes that seem to be missed/lagging, and deletes don't take a 
vector clock, do they?

Pseudocode is something like this (all keys and values are raw bytes wrapped in a 
protobuf ByteString):

Store(bucket, key, value)
Store(bucket, key2, key)
Fetch(bucket, key)    // properly returns value
Delete(bucket, key)
Delete(bucket, key2)
Fetch(bucket, key)    // properly returns null
Store(bucket, key, value)
Store(bucket, key2, key)
Fetch(bucket, key)    // properly returns value
Delete(bucket, key)
Delete(bucket, key2)
Fetch(bucket, key)    // !!! incorrectly returns value instead of null

The correct result (null) is returned if I have breakpoints set that effectively add 
some wait time between the last delete and the fetch.
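
Presumably an explicit wait between the last delete and the fetch would have the same 
effect as the breakpoints. A sketch of what I mean (the Callable is assumed to wrap a 
pb-client fetch of the key; the 50 ms interval and the names are mine):

import java.util.concurrent.Callable;

class WaitForDelete {
    // Poll until the fetch comes back null (i.e. the delete is visible to reads)
    // or the timeout passes. The Callable is assumed to wrap a pb-client fetch
    // of the key that was just deleted.
    static boolean waitForDelete(Callable<byte[]> fetch, long timeoutMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (fetch.call() == null) {
                return true;   // the delete is now visible
            }
            Thread.sleep(50);  // crude stand-in for the pause a breakpoint adds
        }
        return false;          // still reading the old value after timeoutMillis
    }
}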


I will code up a simplified test case tonight (roughly the sketch below); maybe it 
will reveal an error on my side.
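
A rough sketch of what that test will do, assuming a thin KV wrapper around the 
riak-java-pb-client with r=w=dw=quorum (the wrapper and its method names are 
hypothetical, not the client's actual API):

interface KV {
    // Hypothetical wrapper around the riak-java-pb-client; r=w=dw=quorum is
    // assumed to be applied inside these calls.
    void store(String bucket, String key, byte[] value) throws Exception;
    byte[] fetch(String bucket, String key) throws Exception; // null when the key is absent
    void delete(String bucket, String key) throws Exception;
}

class DeleteLagTest {
    static void run(KV kv) throws Exception {
        String bucket = "test-bucket", key = "k1", key2 = "k2";
        byte[] value = "v1".getBytes("UTF-8");

        // Two identical rounds; the failure shows up on the second round's final fetch.
        for (int round = 1; round <= 2; round++) {
            kv.store(bucket, key, value);
            kv.store(bucket, key2, key.getBytes("UTF-8"));
            if (kv.fetch(bucket, key) == null)
                throw new AssertionError("round " + round + ": fetch missed the write");

            kv.delete(bucket, key);
            kv.delete(bucket, key2);
            if (kv.fetch(bucket, key) != null)
                throw new AssertionError("round " + round + ": fetch returned a value after delete");
        }
    }
}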

Thanks
Sent from my iPhone

On Oct 13, 2010, at 9:21 PM, Dan Reverri <d...@basho.com> wrote:

> Hi SC,
> 
> If a single client is writing to a single key with read and write quorums, 
> your reads should reflect your writes. Are other clients updating the same 
> key concurrently? Are you updating the values with the correct vclock? Would 
> it be possible to provide a minimal test case that reproduces the issue?
> 
> Thanks,
> Dan
> 
> Daniel Reverri
> Developer Advocate
> Basho Technologies, Inc.
> d...@basho.com
> 
> 
> On Wed, Oct 13, 2010 at 5:07 PM, scott clasen <scott.cla...@gmail.com> wrote:
> Hi-
> 
>    I am writing a Riak backend for Akka's persistent data structures,
> and am running into what looks like behavior similar to what is
> mentioned in this ticket.
> 
> https://issues.basho.com/show_bug.cgi?id=260, which says:
> 
> In cases where objects are rapidly updated and deleted, unpredictable behavior
> seems to occur due to race conditions.  Exposing tombstones to the client when
> allow_mult is true would allow resolution of the conflict that is normally
> hidden from the user. This will require the client who is interested in
> resolving delete+update conflicts to submit the vclock when performing the
> delete.
> 
> This is on a single Riak 0.13.0 instance on OS X, installed via 'brew
> install riak'.
> 
> The behavior I am seeing is in a single-threaded test that rapidly
> inserts, updates, and deletes the value for a single key in a single
> bucket.
> 
> This is using the protobuf interface, via the riak-java-pb-client.
> All reads use quorum, and all writes use quorum and durableWrite
> quorum as well.
> 
> Running the tests outside the debugger causes test failures because the
> reads are not seeing the latest write/delete.
> 
> When I run the tests in debug mode, with breakpoints set to watch the
> values returned from riak, the tests pass.
> 
> This makes me think some race condition like the one mentioned in ticket
> 260 is occurring, although it could certainly be user error ;).
> 
> Other than the default config, I have tried fiddling with
> last_write_wins and allow_mult, but no combination of those seems to
> help.
> 
> Is there anything I am missing, or any configuration or code that can
> give me a consistent view?
> 
> Thanks
> SC
> 
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
