Mark,

We have 5 EC2 m1.large nodes with 30 GB of data each, stored in the LevelDB
backend.

I've already started building a copying tool. It first runs a MapReduce job
on the source cluster, dumping the keys to disk; then several processes run
through the keys and load the data into the destination cluster.

Here is my initial version (still sequential):
https://github.com/tovbinm/riak-tools
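
For reference, the second stage boils down to something like this (a rough
Python sketch, not the actual code in the repo above; `get` and `put` are
placeholders for Riak client reads and writes, and it uses threads where the
real tool uses separate processes):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_keys(keys, get, put, workers=4):
    """Copy each key from the source to the destination cluster.

    keys: iterable of keys dumped to disk by the MapReduce job
    get:  callable reading a value from the source cluster (placeholder)
    put:  callable writing a value to the destination cluster (placeholder)
    """
    def copy_one(key):
        # Read from the source and write to the destination.
        put(key, get(key))

    # Workers walk the key list concurrently; return the number copied.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(1 for _ in pool.map(copy_one, keys))
```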

-Matthew



On Mon, Jul 2, 2012 at 10:45 PM, Mark Phillips <m...@basho.com> wrote:

> Hey Matthew,
>
> Sorry for the delayed response.
>
> What are the specs of the hardware you're running on? How much data do you
> have in your five nodes?
>
> Mark
>
> On Mon, Jul 2, 2012 at 11:00 AM, Matthew Tovbin <matt...@tovbin.com>wrote:
>
>> Hi,
>>
>> Do you have any suggestions for me?
>>
>>
>> -Matthew
>>
>>
>>
>> On Tue, Jun 26, 2012 at 1:36 PM, Matthew Tovbin <matt...@tovbin.com>wrote:
>>
>>> Hi Basho,
>>>
>>> We have a running cluster of 5 nodes which was accidentally configured
>>> with the default setting "{ ring_creation_size: 64 }".
>>>
>>> As the documentation suggests, we cannot scale this cluster beyond 6
>>> machines, so our only option is to migrate to a new cluster with a
>>> larger ring_creation_size value.
>>>
>>> What is the best way to perform the migration? And by the way, why is 64
>>> still the default value?
>>>
>>> Thanks,
>>>    -Matthew
>>>
>>>
>>
>> _______________________________________________
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>