I think I've found the rare case in my code where I don't fetch the
deleted vclock ('deletedvclock') before 'put'.
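
For anyone who hits the same thing, the flow I was missing looks roughly
like this (a minimal sketch using the official Erlang client, riakc; the
function name, host/port, type/bucket and value are my own illustration,
not anything prescribed by Riak):

put_with_tombstone_vclock(Pid, Bucket, Key, Value) ->
    New = riakc_obj:new(Bucket, Key, Value),
    %% With the 'deletedvclock' option, riakc returns a tombstone's
    %% vclock as {error, notfound, VClock}.
    Obj = case riakc_pb_socket:get(Pid, Bucket, Key, [deletedvclock]) of
              {ok, Existing} ->
                  %% Live object: update it in place so its vclock is reused.
                  riakc_obj:update_value(Existing, Value);
              {error, notfound, VClock} ->
                  %% Tombstone: carry its vclock into the put so the write
                  %% supersedes the delete instead of racing it.
                  riakc_obj:set_vclock(New, VClock);
              {error, notfound} ->
                  %% Genuinely absent key: a fresh put is fine.
                  New
          end,
    riakc_pb_socket:put(Pid, Obj).

Called with the bucket type from below, e.g.:

{ok, Pid} = riakc_pb_socket:start_link("10.0.1.1", 8087),
put_with_tombstone_vclock(Pid, {<<"fs_chunks">>, <<"chunks">>},
                          <<"KEY1">>, <<"chunk data">>).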

Sorry for bothering everyone :)


On 21 May 2014 14:15, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> I think it's a different issue and might be my own misunderstanding:
>
> The actual order of operations is (all same key):
>
> 1. write
> 2. read
> 3. delete
> 4. write
> 5. read - failed
>
> So it might be a tombstone problem. However, I always do 'get' with
> 'deletedvclock: true' before 'put' or 'delete' and provide a vclock.
>
>
> On 21 May 2014 12:10, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a rather rare problem of losing data in Riak 2.0 beta1. It is
>> hard to reproduce, but when it does happen the order of operations
>> looks like this:
>>
>> (All operations are using bucket types).
>>
>> 1. write some data (KEY1) - ok
>> 2. read that data (KEY1) - ok
>> 3. a message appears in the riak console.log:
>>    2014-05-21 08:15:03.328 [info]
>>    <0.15793.157>@riak_kv_exchange_fsm:key_exchange:256 Repaired 1 keys
>>    during active anti-entropy exchange of
>>    {1450083655789255239155218544960687058564870569984,3} between
>>    {0,'riak@10.0.1.1'} and
>>    {1450083655789255239155218544960687058564870569984,'riak@10.0.1.3'}
>> 4. read data (KEY1) - ok
>> 5. write new data (KEY1) - ok
>> 6. read data (KEY1) - no such key
>>
>> All happens within 10-20 seconds.
>>
>> Can someone give any hint on this?
>>
>> Running Riak 2.0.0 beta1 on Ubuntu 14.04
>>
>> Bucket type:
>>
>> {"props":{"backend":"fs_chunks","allow_mult":"false"}}
>>
>> fs_chunks backend:
>>
>> {<<"fs_chunks">>, riak_kv_bitcask_backend, [
>>         {data_root, "/var/lib/riak/fs_chunks"}
>> ]}
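>>
>> (For completeness: that tuple lives in riak_kv's multi_backend list in
>> app.config/advanced.config. The storage_backend and
>> multi_backend_default lines below are my assumptions about the
>> surrounding config, included only so the wiring is clear:)
>>
>> {riak_kv, [
>>         {storage_backend, riak_kv_multi_backend},
>>         {multi_backend_default, <<"fs_chunks">>},
>>         {multi_backend, [
>>                 {<<"fs_chunks">>, riak_kv_bitcask_backend, [
>>                         {data_root, "/var/lib/riak/fs_chunks"}
>>                 ]}
>>         ]}
>> ]}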
>>
>> Thanks!
>>
>> --
>> Oleksiy Krivoshey
>>
>
>
>
> --
> Oleksiy Krivoshey
>



-- 
Oleksiy Krivoshey