Guido, we're not using Java, and that won't be an option.

The technology stack is PHP and/or node.js.

Thanks anyway :)

Best regards


On 10 July 2013 10:35, Edgar Veiga <edgarmve...@gmail.com> wrote:

> Hi Damien,
>
> We have ~1,100,000,000 keys and we are using ~2 TB of disk space
> (the average object length will be ~2000 bytes).
>
> This is a lot to fit in memory (we have had bad past experiences with
> CouchDB...).
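>
> To put a very rough number on the key side of things: assuming something
> like 40 bytes of Bitcask keydir overhead per key plus the key itself (say
> ~30 byte keys), and the default n_val of 3, that would be roughly
> 1,100,000,000 x 3 x (40 + 30) bytes, i.e. ~230 GB of RAM across the
> cluster just for the keydir, or ~40 GB per node on 6 nodes. The overhead
> and key length figures are guesses on my part, so treat this as a
> back-of-envelope estimate only.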
>
> Thanks for the rest of the tips!
>
>
> On 10 July 2013 10:13, damien krotkine <dkrotk...@gmail.com> wrote:
>
>>
>> ( first post here, hi everybody... )
>>
>> If you don't need MapReduce, 2i, etc., then Bitcask will be faster. You just
>> need to make sure all your keys fit in memory, which should not be a problem.
>> How many keys do you have, and what's their average length?
>>
>> About the values, you can save a lot of space by choosing an appropriate
>> serialization. We use Sereal [1] to serialize our data, and it's small
>> enough that we don't need to compress it further (it can automatically use
>> Snappy if more compression is needed). There is a PHP client [2].
>>
>> If you use LevelDB, it can compress using Snappy, but I've been a bit
>> disappointed by Snappy because it didn't work well with our data. If you
>> serialize your PHP objects as verbose strings (I don't know what the usual
>> way to serialize PHP objects is), then you should probably benchmark
>> different compression algorithms on the application side.
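>>
>> Something like this quick-and-dirty script is usually enough to compare
>> sizes and timings; the sample object and compression levels are just
>> placeholders, so swap in a few of your real objects:
>>
>> <?php
>> // Benchmark sketch: serialized size with and without application-side
>> // compression. $object stands in for one of your real PHP objects.
>> $object = array('id' => 123, 'payload' => str_repeat('example data ', 100));
>>
>> $plain = serialize($object);        // native PHP serialization
>> $json  = json_encode($object);      // often smaller for simple structures
>>
>> foreach (array(1, 6, 9) as $level) {
>>     $start = microtime(true);
>>     $compressed = gzcompress($plain, $level);  // zlib, application side
>>     $ms = (microtime(true) - $start) * 1000;
>>     printf("gzcompress level %d: %d -> %d bytes (%.3f ms)\n",
>>            $level, strlen($plain), strlen($compressed), $ms);
>> }
>> printf("serialize: %d bytes, json_encode: %d bytes\n",
>>        strlen($plain), strlen($json));
>>
>> If you have the Sereal or igbinary extensions installed, you can add them
>> to the same loop and compare.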
>>
>>
>> [1]: https://github.com/Sereal/Sereal/wiki/Sereal-Comparison-Graphs
>> [2]: https://github.com/tobyink/php-sereal/tree/master/PHP
>>
>> On 10 July 2013 10:49, Edgar Veiga <edgarmve...@gmail.com> wrote:
>>
>>>  Hello all!
>>>
>>> I have a couple of questions that I would like to put to all of you, in
>>> order to start this migration off as well as possible.
>>>
>>> Context:
>>> - I'm responsible for the migration of a pure key/value store that is
>>> currently stored in MemcacheDB.
>>> - We're serializing PHP objects and storing them.
>>> - The total size occupied is ~2 TB.
>>>
>>> - The idea is to migrate this data to a Riak cluster with the eLevelDB
>>> backend (starting with 6 nodes and 256 partitions; this thing is scaling
>>> very fast). A rough config sketch is below.
>>> - We only need to access the information by key. *We won't need
>>> map/reduce, search, or secondary indexes*. It's a pure key/value store!
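>>>
>>> For reference, this is roughly what we plan to put in app.config (the
>>> data_root path is a placeholder and the exact settings may need tweaking
>>> for our Riak version):
>>>
>>> {riak_core, [
>>>     {ring_creation_size, 256}    %% 256 partitions for the ring
>>> ]},
>>> {riak_kv, [
>>>     {storage_backend, riak_kv_eleveldb_backend}    %% eLevelDB backend
>>> ]},
>>> {eleveldb, [
>>>     {data_root, "/var/lib/riak/leveldb"}    %% placeholder path
>>> ]}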
>>>
>>> My questions are:
>>> - Do you have any Riak fine-tuning tips for this use case (given that we
>>> will only use the key/value capabilities of Riak)?
>>> - Is it expected that those 2 TB will be reduced by the LevelDB
>>> compression? Do you think we should also compress our objects on the client?
>>>
>>> Best regards,
>>> Edgar Veiga
>>>
>>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
