That's terrific. We're very familiar with InnoDB's buffer pool, and
that's exactly the kind of control we're looking for. With the
Innostore backend, is the crash recovery/durability similar to InnoDB
under MySQL?

-J

On Wed, May 26, 2010 at 8:13 PM, Sean Cribbs <s...@basho.com> wrote:
> Riak's usage of memory primarily depends on the backend you choose.
> Innostore, for example, has a configurable buffer pool (cache) that can
> help you limit the memory footprint. Bitcask, our most recently released
> backend, keeps only a hash mapping each key to a file/offset pair in
> memory (the value lives only on disk), so you could store thousands of
> keys per node without much noticeable memory use.
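>
> As a rough sketch (assuming a stock app.config; the module name and the
> size chosen here are illustrative, and the exact keys may vary by
> release), picking Innostore as the backend and capping its buffer pool
> might look like:
>
>     {riak_kv, [
>         {storage_backend, riak_kv_innostore_backend}
>     ]},
>     {innostore, [
>         %% Cap the buffer pool (cache) at 256 MB; the value is in bytes.
>         {buffer_pool_size, 268435456}
>     ]}
>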
>
> Outside of the backend, Riak generally only needs enough RAM to have copies 
> of objects in transit -- either being written or read.  If you see it using 
> too much RAM, there are some flags to the Erlang VM that can be tweaked.
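>
> For instance (an illustrative sketch; flag names and sensible values
> vary by Erlang/OTP release), a couple of lines in vm.args can bound the
> VM indirectly:
>
>     ## Cap the number of concurrent Erlang processes the VM will allow,
>     ## which bounds the memory spent on process heaps.
>     +P 65536
>     ## Enable kernel polling for cheaper handling of many connections.
>     +K true
>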
>
> Sean Cribbs <s...@basho.com>
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On May 26, 2010, at 6:23 PM, Jason J. W. Williams wrote:
>
>> Hi,
>>
>> We have a couple of projects we want to start small, and up to this
>> point we've been considering MongoDB or Cassandra. Mongo's main
>> drawback for us is its extensive use of mmap, which can make it a bad
>> neighbor vis-a-vis RAM usage if it has to co-exist with other parts of
>> our stack. Cassandra has its own drawbacks for our use cases.
>>
>> Riak looks very interesting, but we're curious about its memory
>> profile. That is: what drives memory consumption in Riak, is it
>> possible to limit that consumption, and how does Riak behave when
>> memory exhaustion occurs?
>>
>> -J
>>

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
