I ran into this same issue; it left me confused by my own initial
benchmarking results.  For my purposes, I actually shut down the Riak
nodes, delete the data directories, then restart Riak between tests.
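In sketch form, that stop/wipe/restart cycle looks roughly like the
following.  The riak control script path and the data directories are
assumptions on my part -- they vary by install and by backend:

```ruby
# Sketch of the reset cycle: stop the node, wipe its data, start again.
# RIAK_BIN and DATA_DIRS are assumptions -- adjust for your install/backend.
RIAK_BIN  = "/usr/sbin/riak"
DATA_DIRS = ["/var/lib/riak/bitcask", "/var/lib/riak/ring"]

# Build the command sequence rather than running it directly, so the
# steps can be inspected (or executed with `system`) as one unit.
def reset_commands(bin = RIAK_BIN, dirs = DATA_DIRS)
  ["#{bin} stop"] +
    dirs.map { |dir| "rm -rf #{dir}" } +
    ["#{bin} start", "#{bin} ping"] # ping verifies the node is responding
end

reset_commands.each { |cmd| puts cmd }
```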

Using a different backend is a good idea I hadn't considered, unless
that backend's performance characteristics differ enough to make those
tests less indicative of your production usage of the platform.  I
don't know enough to tell you what those differences would be.

Yes, this means I have to reload all of my data between tests.  For me,
this is pretty optimized at this point so it's not a big deal.  YMMV.

I wonder about the validity of my approach, too.  Is it too
controlled?  At some point I'll want to understand Riak's performance
under scenarios with more variables, but I personally don't need to be
there yet.  In my first test I threw way too much data at Riak, and it
didn't seem to matter what I was tweaking: performance was pretty
unpredictable and my measurements weren't helpful to me.

On Fri, Sep 7, 2012 at 7:02 AM, Sean Cribbs <s...@basho.com> wrote:

> If it's only for your test suite, use a local memory-only node via the
> Ripple::TestServer. If you configure it correctly (there's a bundled
> Rails generator that helps), it will delete all the contents of Riak
> in one fell swoop before or after each test/example.
>
> On Fri, Sep 7, 2012 at 2:49 AM, Brad Heller <b...@cloudability.com> wrote:
> > Hey all,
> >
> > So we're rolling out Riak more and more and so far, so good. In fact
> > we're starting to accumulate a pretty sizable test suite of our data
> > access layer. To support this we've written some code that helps
> > clean out buckets between test runs. We're using Ruby (ripple +
> > riak-ruby-client, obviously) so we just monkey patch the bucket
> > object and detect if it was used or not.
> >
> > If it was used during a test, we iterate over all the keys and delete
> > each one. We're cognizant of the risks in doing this for large
> > clusters, but in our test enviro the number of documents per test is
> > pretty small and the operations themselves are quite fast.
> >
> > The problem we're having, though, is that when we run the full suite
> > (or a large subset of the suite) all at once, the churn in Riak seems
> > to be so quick that sometimes Riak isn't quite in the state we expect
> > it to be in when the test runs! For instance, we have a test that
> > checks that the right number of linked documents are created when a
> > given object is saved. If we run the test by itself, everything works
> > fine--the expectation that 10 documents are created indeed checks
> > out. If we run this test as part of a large suite of tests, the
> > expectation fails and sometimes 11 or 12 documents appear (we check
> > by counting the keys in the linked bucket).
> >
> > I would think that this has something to do with Riak's eventually
> > consistent delete operation, but wanted to tap in to the brain trust: Is
> > there any tuning we can do in test to prevent this?
> >
> > Thanks,
> >
> > Brad Heller | Engineering Lead | Cloudability.com | 541-231-1514 |
> > Skype: brad.heller | @bradhe | @cloudability
> >
> > We're hiring! http://cloudability.com/jobs
> >
> >
> > _______________________________________________
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
>
>
> --
> Sean Cribbs <s...@basho.com>
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>
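
For what it's worth, the bucket-clearing approach Brad describes
(iterate a bucket's keys, delete each one) can be sketched roughly like
this.  The #keys and #delete calls match the riak-client gem's
Riak::Bucket API; the standalone helper signature is my own:

```ruby
# Sketch of the between-test cleanup described above: iterate a bucket's
# keys and delete each one. `bucket` is anything responding to #keys and
# #delete -- with the riak-client gem, that's `client.bucket("name")`.
# Note: key listing walks the entire keyspace, so this is only sane for
# small test buckets, never production.
def clear_bucket(bucket)
  bucket.keys.each { |key| bucket.delete(key) }
end
```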



-- 
*Pinney H. Colton*
*Bitwise Data, LLC*
+1.763.220.0793 (o)
+1.651.492.0152 (m)
http://www.bitwisedata.com
*Be Smart!  Get Bitwise.*
