Hi Edgar,

You don't need to compress your objects: LevelDB will do that for you, and if you are using Protocol Buffers it will compress the network traffic for you too, without compromising performance or any CPU-bound process. There isn't anything special about the LevelDB config either; I would suggest trying the Riak defaults, which will work for 95%+ of cases, start from there and see how that works for you.
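
A minimal sketch of what that looks like from a client, assuming the official Riak Python client and a local node listening for Protocol Buffers on port 8087 (the bucket and key names are made up):

    # Sketch: plain key/value put and get over the Protocol Buffers interface.
    # Assumes the official "riak" Python client and a node with pb_port 8087;
    # bucket and key names are only examples.
    import riak

    client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
    bucket = client.bucket('sessions')

    obj = bucket.new('user:123', data={'last_seen': 1373446980})
    obj.store()   # no client-side compression needed; LevelDB compresses on disk

    print(bucket.get('user:123').data)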

HTH,

Guido.

On 10/07/13 10:03, Edgar Veiga wrote:
Hi Guido.

Thanks for your answer!

Bitcask isn't an option due to the amount of RAM needed... We would need a lot more physical nodes, so more money spent...

Instead we're using fewer machines with SSD disks to improve eLevelDB performance.

Best regards



On 10 July 2013 09:58, Guido Medina <guido.med...@temetra.com> wrote:

    Well, I rushed my answer before. If you want performance, you
    probably want Bitcask; if you want compression, then LevelDB. The
    following links should help you decide:

    http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Bitcask/
    http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/LevelDB/

    Or Multi: use one as the default and the other for specific buckets:

    http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Multi/
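
    In case it helps, a rough sketch of what that could look like from the
    Python client, assuming the named backends (e.g. "bitcask_mult" and
    "eleveldb_mult") are already declared in the multi_backend section of
    app.config as described in the Multi docs above:

        # Sketch only: point one bucket at a named LevelDB backend while the
        # cluster default (e.g. Bitcask) serves everything else. The backend
        # names are assumptions and must match app.config.
        import riak

        client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)

        big_objects = client.bucket('big_objects')
        big_objects.set_property('backend', 'eleveldb_mult')

        # All other buckets keep using the default backend.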

    HTH,

    Guido.



    On 10/07/13 09:53, Guido Medina wrote:
    Then you are better off with Bitcask; that will be the fastest in
    your case (no 2i, no searches, no M/R).

    HTH,

    Guido.

    On 10/07/13 09:49, Edgar Veiga wrote:
    Hello all!

    I have a couple of questions that I would like to put to all of
    you, in order to start this migration as well as possible.

    Context:
    - I'm responsible for the migration of a pure key/value store
    that is currently stored in memcacheDB.
    - We're serializing PHP objects and storing them.
    - The total size occupied is ~2TB.

    - The idea is to migrate this data to a Riak cluster with the
    eLevelDB backend (starting with 6 nodes and 256 partitions; this
    thing is scaling very fast).
    - We only need to access the information by key. *We won't need
    map/reduce, search, or secondary indexes*. It's a pure
    key/value store!

    My questions are:
    - Do you have any Riak fine-tuning tips for this use case
    (given that we will only use the key/value capabilities of
    Riak)?
    - Is it expected that those 2TB will shrink thanks to LevelDB
    compression? Do you think we should also compress our objects
    on the client?

    Best regards,
    Edgar Veiga


_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
