Depending on the n_val you have set for that bucket, Riak will store each 
object n times, on n different nodes. There are two other parameters you should 
know about, r and w. When writing, Riak waits for w of the n nodes to finish 
the write before returning; when reading, it waits for r of the n nodes to 
respond before returning. That's the basis of how Riak handles fault and 
partition tolerance: if one node is down your cluster still functions, and the 
r and w values define a sort of "majority vote" threshold for dealing with 
split-brain situations.
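
For concreteness, here's a rough sketch of setting n_val on a bucket through 
the HTTP interface (the "events" bucket name, host, and port are placeholders; 
double-check the endpoint against the wiki for your Riak version):

    import json
    import requests  # plain HTTP client; any Riak client library would do

    # Hypothetical "events" bucket on a local node. n_val=3 means each object
    # is written to 3 vnodes (and, with enough nodes, 3 physical machines).
    resp = requests.put(
        "http://localhost:8098/riak/events",
        data=json.dumps({"props": {"n_val": 3}}),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()

The usual rule of thumb is that r + w > n_val gives you that overlap: any read 
quorum is then guaranteed to intersect the last successful write quorum.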

Anyway, for your purposes you could set w=1 and r=3 for faster writes at the 
expense of potentially slower reads. I've never tried this (or any of the 
backends besides Bitcask), so I don't know what you should expect.
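
In HTTP terms the per-request overrides would look roughly like this (untested 
sketch, same placeholder bucket/key names; verify the query parameters against 
the docs for your version):

    import requests

    # Write: return as soon as 1 replica has acknowledged the put.
    requests.put(
        "http://localhost:8098/riak/events/key-0001",
        params={"w": 1},
        data=b"twelve bytes",  # a small 12-byte value
        headers={"Content-Type": "application/octet-stream"},
    ).raise_for_status()

    # Read: wait for 3 of the n_val replicas to respond before returning.
    resp = requests.get("http://localhost:8098/riak/events/key-0001",
                        params={"r": 3})
    resp.raise_for_status()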

As for bulk insert and preserving locality, I don't know of a way to do that 
with Riak except to batch your 1000 keys into a single object, identified by 
one key. As far as Riak is concerned, it's just a 12KB opaque object, which 
your application would need to always write and read all at once.
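
A rough illustration of that batching, with made-up bucket/key names and JSON 
as the packing format (anything compact your app likes would work just as well):

    import json
    import requests

    # Pack 1000 small records into one value under a single key, so Riak sees
    # a single ~12 KB object instead of 1000 tiny ones.
    records = {"rec-%04d" % i: "twelve bytes" for i in range(1000)}
    requests.put(
        "http://localhost:8098/riak/events/batch-0001",
        params={"w": 1},
        data=json.dumps(records),
        headers={"Content-Type": "application/json"},
    ).raise_for_status()

    # Reading any one record means fetching and decoding the whole batch;
    # updating one record means a read-modify-write of the whole object.
    batch = json.loads(requests.get(
        "http://localhost:8098/riak/events/batch-0001").content)

The upside is that all 1000 records hash to the same vnodes, which is about as 
much locality as you get with consistent hashing; the downside is the 
read-modify-write cost and losing the ability to address records individually.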

If you don't batch like that, you should look for a discussion on this mailing 
list from last week regarding capacity planning and very small objects. There's 
a bit of overhead associated with each object that will be significant for 
objects as small as 12 bytes. You could skip over the parts about Bitcask 
overhead...
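
Just to put a rough number on why that matters (the per-object figure below is 
a made-up placeholder, not a real Riak number; the real ones are in that thread 
and on the capacity planning wiki page):

    # If the fixed per-object cost were, say, 40 bytes (hypothetical), then for
    # 12-byte values the bookkeeping dwarfs the data itself:
    value_size = 12
    per_object_overhead = 40  # placeholder, not a measured Riak figure
    print(per_object_overhead / float(value_size))          # ~3.3x the payload

    # Batched 1000-to-1, the same overhead is amortized over ~12 KB:
    print(per_object_overhead / float(1000 * value_size))   # ~0.3% of the payload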

On Saturday, May 28, 2011 at 9:59 AM, Michael McClain wrote:

> Thank you, Mike and Greg, for the response.
> I've just replied to the list.
> In my use case, I need to be able to write 100,000 keys per second, where each 
> key is very small (12 bytes), and I always insert 1000 keys at once, in a 
> bulk insert. I would also like to preserve the locality of the keys inserted 
> together (so that they always stay on the same node). Do you know if that is 
> possible?
> 
> Thank you
> 
> 2011/5/28 Mike Oxford <moxf...@gmail.com>
> >  With enough RAM you could just have it keep the whole thing in 
> > disk-cache...
> > 
> > -mox
> > 
> > 
> > On Fri, May 27, 2011 at 11:11 PM, Greg Nelson <gro...@dropcam.com> wrote:
> > > Michael,
> > > 
> > > You might want to check out riak_kv_ets_backend, 
> > > riak_kv_gb_trees_backend, and riak_kv_cache_backend.
> > > 
> > > http://wiki.basho.com/Configuration-Files.html
> > > 
> > > -Greg
> > > 
> > > On Friday, May 27, 2011 at 10:35 PM, Michael McClain wrote:
> > > 
> > > > Hi,
> > > > 
> > > > Is it possible to store the whole database in memory?
> > > > In a similar way as Redis does.
> > > > 
> > > > I'm really interested in the distributed map/reduce done by Riak 
> > > > ("bring processing to the data, instead of data to processors"), but I 
> > > > need the faster writes/reads that a memory-only database could provide.
> > > > In case you don't support memory-only storage (no disk touched / all 
> > > > keys and data fitting in memory on all nodes) yet, do you plan on 
> > > > implementing it?
> > > > 
> > > > Thank you,
> > > > Michael
> > > 
> > 
> 

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
