On 10 July 2013 11:03, Edgar Veiga <edgarmve...@gmail.com> wrote:

> Hi Guido.
>
> Thanks for your answer!
>
> Bitcask isn't an option due to the amount of RAM needed... We would need
> a lot more physical nodes, so more money spent...
>

Why is it not an option?

If you use Bitcask, then each node needs to store its keys in memory. It's
usually not a lot. In a previous email I asked you for the average length of
*keys*, but you gave us the average length of *values* :)

We have 1 billion keys and they fit on a 5-node ring (check out
http://docs.basho.com/riak/1.2.0/references/appendices/Bitcask-Capacity-Planning/).
Our bucket names are 1 letter, our keys are 10 chars long.
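
For a rough sense of scale, here is the back-of-the-envelope arithmetic from
that capacity-planning page as a small Python sketch; the per-key overhead
constant and the n_val of 3 are assumptions on my part, so plug in the exact
figures from the doc:

    # Rough Bitcask keydir RAM estimate (a sketch, not an exact formula).
    PER_KEY_OVERHEAD = 44.5   # bytes; assumed static Bitcask overhead per key
    N_VAL = 3                 # assumed replication factor

    def keydir_ram_gb(num_keys, bucket_len, key_len, nodes):
        per_key = PER_KEY_OVERHEAD + bucket_len + key_len
        total_bytes = num_keys * N_VAL * per_key  # every replica keeps its keys in RAM
        return total_bytes / 1e9, total_bytes / 1e9 / nodes

    # 1 billion keys, 1-char bucket names, 10-char keys, 5 nodes:
    print(keydir_ram_gb(1_000_000_000, 1, 10, 5))  # roughly (166.5, 33.3) GB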

What does a typical key look like? Also, what are you using to serialize
your PHP objects? Maybe you could paste a typical value somewhere as well.

Damien


>
> Instead, we're using fewer machines with SSD disks to improve eLevelDB
> performance.
>
> Best regards
>
>
>
> On 10 July 2013 09:58, Guido Medina <guido.med...@temetra.com> wrote:
>
>>  Well, I rushed my answer before. If you want performance, you probably
>> want Bitcask; if you want compression, then LevelDB. The following links
>> should help you decide better:
>>
>> http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Bitcask/
>> http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/LevelDB/
>>
>> Or the Multi backend: use one as the default and the other for specific buckets:
>>
>> http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Multi/
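
For reference, the multi-backend section of app.config looks roughly like the
sketch below (the backend names and data_root paths are just examples; see the
Multi link above for the authoritative layout). Individual buckets can then be
pointed at a specific backend through their "backend" bucket property.

    %% app.config (Riak 1.2-style), inside the riak_kv section:
    {riak_kv, [
        {storage_backend, riak_kv_multi_backend},
        {multi_backend_default, <<"bitcask_mult">>},
        {multi_backend, [
            %% {name, backend module, backend-specific options}
            {<<"bitcask_mult">>,  riak_kv_bitcask_backend,  [{data_root, "/var/lib/riak/bitcask"}]},
            {<<"eleveldb_mult">>, riak_kv_eleveldb_backend, [{data_root, "/var/lib/riak/leveldb"}]}
        ]}
    ]}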
>>
>> HTH,
>>
>> Guido.
>>
>>
>>
>> On 10/07/13 09:53, Guido Medina wrote:
>>
>> Then you are better off with Bitcask; that will be the fastest in your
>> case (no 2i, no search, no M/R).
>>
>> HTH,
>>
>> Guido.
>>
>> On 10/07/13 09:49, Edgar Veiga wrote:
>>
>> Hello all!
>>
>>  I have a couple of questions that I would like to put to all of you,
>> in order to start this migration as well as possible.
>>
>>  Context:
>> - I'm responsible for the migration of a pure key/value store that is
>> currently stored in memcacheDB.
>> - We're serializing PHP objects and storing them.
>> - The total size occupied is ~2 TB.
>>
>>  - The idea is to migrate this data to a Riak cluster with the eLevelDB
>> backend (starting with 6 nodes and 256 partitions; this thing is scaling
>> very fast).
>> - We only need to access the information by key. *We won't need
>> map/reduce, search, or secondary indexes*. It's a pure key/value store!
>>
>>  My questions are:
>> - Do you have any Riak fine-tuning tips for this use case (given that we
>> will only use the key/value capabilities of Riak)?
>> - It's expected that those 2 TB will shrink thanks to LevelDB compression.
>> Do you think we should also compress our objects on the client?
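
(As an aside: LevelDB can already compress blocks on disk with Snappy, so how
much client-side compression adds depends on the data. If you do want to try
it, here is a minimal sketch of compressing the serialized value before the
write, using Python's zlib against Riak's HTTP interface; the endpoint, bucket
and key below are made up.)

    # Compress the already-serialized PHP object on the client before storing it.
    import zlib
    import requests

    serialized = b'O:8:"stdClass":1:{s:3:"foo";s:3:"bar";}'  # example serialized value
    compressed = zlib.compress(serialized, 6)

    requests.put(
        "http://localhost:8098/buckets/o/keys/abc1234567",
        data=compressed,
        headers={
            "Content-Type": "application/octet-stream",
            "Content-Encoding": "deflate",  # a convention so readers know to decompress
        },
    )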
>>
>>  Best regards,
>> Edgar Veiga
>>
>>
>>
>
>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
