I still haven't really gotten to the bottom of the best way to do this
(short of paying for MDC
<http://basho.com/blog/technical/2012/08/08/MDC-Replication-in-Riak-1p2/>):

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-October/009951.html

Previously, I've used backup/restore for situations like this, but our
backup has now grown to around 100GB, so that approach has become impractical.

Shane, in your maintenance window could you:
* create your new cluster
* stop any new data being added to the old cluster
* run a riak-admin backup against the old cluster
* run a riak-admin restore into the new one (rough commands sketched below)
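
Something like the following, assuming the default Erlang cookie ("riak"),
made-up node names and a backup path with enough free space:

    riak-admin backup riak@old-node1 riak /backups/riak.bak all
    riak-admin restore riak@new-node1 riak /backups/riak.bak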

The maintenance window here saves you a lot of trouble... Unfortunately,
most people won't get one ;)

Cheers
Matt



On 14 November 2012 09:44, Martin Woods <mw2...@gmail.com> wrote:

> Hi Tom
>
> I'd be very interested to know if Shane's approach should work, or if you
> know of any good reason why that approach would cause issues.
>
> Also, aren't there several very real business use cases here that users of
> Riak will inevitably encounter and must be able to satisfy? Shane mentions
> two use cases below: creating a test environment from a copy of production
> data, and migrating data within one cloud provider from one set of systems
> to a separate set of systems.
>
> To add to this, what about the case where a Riak customer needs to move
> from one cloud provider to another? How does this customer take his data
> with him?
>
> All of the above cases require that a separate cluster be spun up from the
> original cluster, with different names and IP addresses for the Riak nodes
> involved.
>
> None of these use cases are satisfied by using the riak-admin cluster
> command.
>
> It seemed that this was the purpose of the reip command, but if Basho is
> poised to deprecate this command, and indeed no longer recommends its use,
> how are the previous cases supported? Surely these are important scenarios
> for users of Riak, and therefore Basho?
>
> At one level, it seems it should be entirely possible to simply copy the
> data directory from each Riak node and tell Riak that the node names and IP
> addresses have changed (reip!). So what's the problem with doing this?
>
> Regards,
> Martin.
>
>
> On 13 November 2012 17:16, Thomas Santero <tsant...@basho.com> wrote:
>
>> Hi Shane,
>>
>> I'm sorry for the delay on this. Over the weekend I was working to
>> replicate your setup so I could answer your question from experience.
>> Alas, time got the best of me and I have not yet finished.
>>
>> That said, I'm inclined to suggest upgrading Riak on your current cluster
>> first and then using riak-admin cluster replace to move off the VMs and
>> onto metal.
>>
>> * In this scenario, do a rolling upgrade (including making backups) of
>> the current cluster.
>> * Install Riak on the new machines.
>> * Join the first new machine to the cluster.
>> * Use riak-admin cluster replace to replace one of the old nodes with the
>> new node.
>> * Wait for ring-ready, then repeat for the other nodes (rough commands
>> sketched below).
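>>
>> A rough sketch of one round of that, with hypothetical node names (run
>> these on the new node once Riak is installed and started):
>>
>>     riak-admin cluster join riak@old-node1
>>     riak-admin cluster replace riak@old-node1 riak@new-node1
>>     riak-admin cluster plan
>>     riak-admin cluster commit
>>     riak-admin transfers    # wait for handoff and ring-ready, then repeat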
>>
>> Tom
>>
>>
>> On Tue, Nov 13, 2012 at 11:59 AM, Shane McEwan <sh...@mcewan.id.au> wrote:
>>
>>> Anyone? Bueller? :-)
>>>
>>> Installing Riak 1.1.1 on the new nodes, copying the data directories
>>> from the old nodes, issuing a "reip" on all the new nodes, starting up,
>>> waiting for partition handoffs to complete, shutting down, upgrading to
>>> 1.2.1 and starting up again got us to where we want to be. But this is not
>>> very convenient.
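>>>
>>> For reference, the "reip" step on each new node (with the node stopped
>>> and hypothetical node names here) was along the lines of:
>>>
>>>     riak-admin reip riak@old-node1 riak@new-node1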
>>>
>>> What do I do when I come to create our test environment, where I'll want
>>> to copy production data onto the test nodes on a regular basis? At that
>>> point I won't have the "luxury" of downgrading to 1.1.1 to get a working
>>> "reip" command.
>>>
>>> Surely there's gotta be an easier way to spin up a new cluster with new
>>> names and IPs but with old data?
>>>
>>> Shane.
>>>
>>>
>>> On 08/11/12 21:10, Shane McEwan wrote:
>>>
>>>> G'day!
>>>>
>>>> Just to add to the list of people asking questions about migrating to
>>>> 1.2.1 . . .
>>>>
>>>> We're about to migrate our 4-node production Riak database from 1.1.1 to
>>>> 1.2.1. At the same time we're also migrating from virtual machines to
>>>> physical machines. These machines will have new names and IP addresses.
>>>>
>>>> The process of doing rolling upgrades is well documented but I'm unsure
>>>> of the correct procedure for moving to an entirely new cluster.
>>>>
>>>> We have the luxury of a maintenance window so we don't need to keep
>>>> everything running during the migration. Therefore the current plan is
>>>> to stop the current cluster, copy the Riak data directories to the new
>>>> machines and start up the new cluster. The hazy part of the process is
>>>> how we "reip" the database so it will work in the new cluster.
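>>>>
>>>> For the copy step itself we're assuming something simple like the
>>>> following per node (default packaged data directory shown; adjust to
>>>> match app.config):
>>>>
>>>>     riak stop
>>>>     rsync -a /var/lib/riak/ new-node1:/var/lib/riak/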
>>>>
>>>> We've tried using the "riak-admin reip" command but were left with one
>>>> of our nodes in "(legacy)" mode according to "riak-admin member-status".
>>>> From an earlier e-mail thread[1] it seems like "reip" is deprecated and
>>>> we should be doing a "cluster force-replace" instead.
>>>>
>>>> So, would the new procedure be the following?
>>>>
>>>> 1. Shut down the old cluster
>>>> 2. Copy data directory
>>>> 3. Start new cluster (QUESTION: The new nodes don't own any of the
>>>> partitions in the data directory. What does it do?) (QUESTION: The new
>>>> nodes won't be part of a cluster yet. Do I need to "join" them before I
>>>> can do any of the following commands? Or do I just put all the joins and
>>>> force-replace commands into the same plan and commit it all together?)
>>>> 4. Issue "riak-admin cluster force-replace old-node1 new-node1"
>>>> (QUESTION: Do I run this command just on "new-node1" or on all nodes?)
>>>> 5. Issue "force-replace" commands for the remaining three nodes.
>>>> 6. Issue a "cluster plan" and "cluster commit" to commit the changes
>>>> (one possible shape of the combined plan is sketched after this list).
>>>> 7. Cross fingers.
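>>>>
>>>> One possible shape of the combined plan, with hypothetical node names
>>>> (this assumes the joins and force-replaces can be staged together and
>>>> committed as one plan, which is exactly the part I'm unsure about):
>>>>
>>>>     # on each of new-node2..new-node4:
>>>>     riak-admin cluster join riak@new-node1
>>>>
>>>>     # then, on any one of the new nodes:
>>>>     riak-admin cluster force-replace riak@old-node1 riak@new-node1
>>>>     riak-admin cluster force-replace riak@old-node2 riak@new-node2
>>>>     riak-admin cluster force-replace riak@old-node3 riak@new-node3
>>>>     riak-admin cluster force-replace riak@old-node4 riak@new-node4
>>>>     riak-admin cluster plan
>>>>     riak-admin cluster commit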
>>>>
>>>> In my mind the "replace" and/or "force-replace" commands are something
>>>> we would use if we had a failed node and needed to bring a spare online
>>>> to take over. It doesn't feel like something you would do if you don't
>>>> already have a cluster in place and need to "replace" ALL nodes.
>>>>
>>>> Of course, we want to test this procedure before doing it for real. What
>>>> are the risks of running the above procedure while the old cluster is
>>>> still up? The new nodes are on a segregated network and shouldn't be able
>>>> to contact the old nodes, but what would happen if we did the above and
>>>> found the network wasn't as segregated as we originally thought? Would
>>>> the new nodes start trying to communicate with the old nodes before the
>>>> "force-replace" can take effect? Or, because all the cluster changes are
>>>> atomic, is there no risk of that?
>>>>
>>>> Sorry for all the questions. I'm just trying to get a clear procedure
>>>> for moving an entire cluster to new hardware and hopefully this thread
>>>> will help other people in the future.
>>>>
>>>> Thanks in advance!
>>>>
>>>> Shane.
>>>>
>>>> [1] http://comments.gmane.org/gmane.comp.db.riak.user/8418
>>>>
>>>>
>>
>>
>>
>> --
>> @tsantero <https://twitter.com/#!/tsantero>
>> Technical Evangelist
>> Basho Technologies
>> 347-571-3995
>>
>>
>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
