Hey all,

So we're rolling out Riak more and more, and so far, so good. In fact we're 
starting to accumulate a pretty sizable test suite for our data access layer. To 
support this we've written some code that helps clean out buckets between test 
runs. We're using Ruby (ripple + ruby-riak-client, obviously), so we just 
monkey-patch the bucket object to detect whether it was used.
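For context, the dirty-bucket tracking looks roughly like this. This is a
minimal sketch using a stand-in Bucket class rather than the real Riak::Bucket,
and the DirtyTracking module name is made up, but it shows the monkey-patch
shape:

```ruby
# Stand-in for Riak::Bucket from the riak-client gem.
class Bucket
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def store(key, value)
    # The real implementation would write the object to Riak.
  end
end

# Monkey patch: remember every bucket written to during the current test.
module DirtyTracking
  def self.dirty_buckets
    @dirty_buckets ||= []
  end

  def store(key, value)
    bookkeeping = DirtyTracking.dirty_buckets
    bookkeeping << self unless bookkeeping.include?(self)
    super
  end
end

# Module#prepend puts DirtyTracking ahead of Bucket in the lookup chain,
# so our store runs first and then calls the original via super.
Bucket.prepend(DirtyTracking)

users = Bucket.new("users")
users.store("brad", "...")
DirtyTracking.dirty_buckets.map(&:name)  # => ["users"]
```

A test teardown can then walk DirtyTracking.dirty_buckets and clear each one,
then reset the list for the next test.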

If a bucket was used during a test, we iterate over all of its keys and delete 
each one. We're cognizant of the risks of doing this on large clusters, but in 
our test environment the number of documents per test is pretty small and the 
operations themselves are quite fast.
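The cleanup itself is just a key scan plus per-key deletes. A minimal sketch of
the loop, again with an in-memory stand-in (the real calls would be
riak-client's bucket.keys and bucket.delete, which hit the cluster and are
where the eventual-consistency window opens up):

```ruby
# In-memory stand-in for a Riak bucket, just to show the cleanup shape.
class FakeBucket
  def initialize(data)
    @data = data  # key => value hash standing in for stored Riak objects
  end

  def keys
    @data.keys
  end

  def delete(key)
    @data.delete(key)
  end

  def empty?
    @data.empty?
  end
end

bucket = FakeBucket.new("a" => 1, "b" => 2, "c" => 3)

# The between-test cleanup: list every key, delete each one.
bucket.keys.each { |k| bucket.delete(k) }
```

Against a real cluster, the list returned by the key scan and the subsequent
deletes are separate operations, which is exactly why a fast follow-on test can
still observe keys that are mid-delete.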

The problem we're having, though, is that when we run the full suite (or a 
large subset of the suite) all at once the churn in Riak seems to be so quick 
that sometimes Riak isn't quite in the state that we expect it to be in when 
the test runs! For instance, we have a test that checks that the right number 
of linked documents are created when a given object is saved. If we run the 
test by itself everything works fine--the expectation that 10 documents are 
created indeed checks out. If we run this test as part of a larger suite, the 
expectation fails and sometimes 11 or 12 documents appear (we check by counting 
the keys in the linked bucket).

I suspect this has something to do with Riak's eventually consistent delete 
operation, but I wanted to tap into the brain trust: is there any tuning we can 
do in test to prevent this?

Thanks,

Brad Heller | Engineering Lead | Cloudability.com | 541-231-1514 | Skype: 
brad.heller | @bradhe | @cloudability

We're hiring! http://cloudability.com/jobs

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
