The easiest way to do this is to replace one node at a time using rsync.
I don't see why it needs to be more complicated than copying the data to a
new machine and swapping it into the cluster. Bringing up a new DC from
snapshots is going to be a nightmare in comparison.
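For concreteness, a rough dry-run sketch of that rsync replacement (the host addresses and data path below are assumptions, not from the thread; `run` only prints each step so the sequence can be reviewed before doing it for real):

```shell
# Dry-run sketch of one-node-at-a-time replacement via rsync.
# OLD/NEW addresses and DATA path are hypothetical placeholders.
run() { echo "+ $*"; }

OLD=10.0.0.11                   # node being retired (assumed address)
NEW=10.0.0.21                   # bigger replacement (assumed address)
DATA=/var/lib/cassandra/data    # default Cassandra data directory

run ssh "$OLD" nodetool drain               # flush memtables, stop accepting writes
run ssh "$OLD" sudo service cassandra stop  # stop the old node
run rsync -avP "$OLD:$DATA/" "$NEW:$DATA/"  # copy SSTables verbatim
# Bring the new node up with the old node's tokens and IP so the cluster
# sees it as the same member: no streaming, no token rebalancing.
run ssh "$NEW" sudo service cassandra start
```

The dry-run wrapper is just a safety habit: review the printed sequence, then drop the `echo` to execute.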

On Wed, Feb 21, 2018 at 8:16 AM Carl Mueller <carl.muel...@smartthings.com>
wrote:

> DCs can be stood up with snapshotted data.
>
>
> Stand up a new cluster with your old cluster snapshots:
>
>
> https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_snapshot_restore_new_cluster.html
>
> Then link the DCs together.
>
> Disclaimer: I've never done this in real life.
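A hedged dry-run sketch of that snapshot-restore route (keyspace, table, and host names below are placeholders, not from the thread; each command is printed rather than executed):

```shell
# Print the snapshot-restore steps per the linked DataStax procedure.
# KS/TBL and the target host are hypothetical names.
run() { echo "+ $*"; }
KS=my_keyspace
TBL=my_table

run nodetool snapshot -t pre_migration "$KS"   # on every old node
# Copy each snapshot into the matching table directory on the new
# cluster (token assignments must match), then load it per node:
run nodetool refresh "$KS" "$TBL"
# Or stream regardless of token layout via sstableloader:
run sstableloader -d new_node_1 "/var/lib/cassandra/data/$KS/$TBL"
```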
>
> On Wed, Feb 21, 2018 at 9:25 AM, Nitan Kainth <nitankai...@gmail.com>
> wrote:
>
>> A new DC will be faster, but it may impact cluster performance due to
>> streaming.
>>
>> Sent from my iPhone
>>
>> On Feb 21, 2018, at 8:53 AM, Leena Ghatpande <lghatpa...@hotmail.com>
>> wrote:
>>
>> We do use LOCAL_ONE and LOCAL_QUORUM currently. But these 8 nodes need to
>> be in 2 different DCs, so we would end up creating 2 additional new DCs and
>> dropping 2.
>>
>> Are there any advantages to adding a DC over replacing one node at a time?
>>
>>
>> ------------------------------
>> *From:* Jeff Jirsa <jji...@gmail.com>
>> *Sent:* Wednesday, February 21, 2018 1:02 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Best approach to Replace existing 8 smaller nodes in
>> production cluster with New 8 nodes that are bigger in capacity, without a
>> downtime
>>
>> You add the nodes with RF=0 so there's no streaming, then bump it to RF=1
>> and run repair, then RF=2 and run repair, then RF=3 and run repair. Then
>> you either change the app to use LOCAL_QUORUM in the new DC, or reverse the
>> process by decreasing the RF in the original DC one step at a time.
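A minimal sketch of that sequence (this is not the poster's exact procedure; the keyspace and DC names are assumptions, and the function only prints the commands rather than running them):

```shell
# Print the phased ALTER KEYSPACE / repair sequence for growing the
# new DC's replication factor one step at a time.
phase_rf() {
  local ks=$1 old_dc=$2 new_dc=$3
  local rf
  for rf in 1 2 3; do
    echo "cqlsh -e \"ALTER KEYSPACE $ks WITH replication = {'class': 'NetworkTopologyStrategy', '$old_dc': 3, '$new_dc': $rf}\""
    echo "nodetool repair --full $ks   # streams the newly owned replicas"
  done
}

phase_rf my_keyspace old_dc new_dc
```

Each repair pass streams the data the new DC now owns before the next RF bump widens its ownership further.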
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Feb 20, 2018, at 8:51 PM, Kyrylo Lebediev <kyrylo_lebed...@epam.com>
>> wrote:
>> >
>> > I'd say the "add new DC, then remove old DC" approach is riskier,
>> especially if they use QUORUM CL (in that case they will need to change the
>> CL to LOCAL_QUORUM, otherwise they'll run into a lot of blocking read
>> repairs).
>> > Also, if there is a chance to get rid of streaming, it's worth taking, as
>> direct data copy (not by means of C*) is usually more efficient and less
>> troublesome.
>> >
>> > Regards,
>> > Kyrill
>> >
>> > ________________________________________
>> > From: Nitan Kainth <nitankai...@gmail.com>
>> > Sent: Wednesday, February 21, 2018 1:04:05 AM
>> > To: user@cassandra.apache.org
>> > Subject: Re: Best approach to Replace existing 8 smaller nodes in
>> production cluster with New 8 nodes that are bigger in capacity, without a
>> downtime
>> >
>> > You can also create a new DC and then terminate the old one.
>> >
>> > Sent from my iPhone
>> >
>> >> On Feb 20, 2018, at 2:49 PM, Kyrylo Lebediev <kyrylo_lebed...@epam.com>
>> wrote:
>> >>
>> >> Hi,
>> >> Consider using this approach, replacing nodes one by one:
>> https://mrcalonso.com/2016/01/26/cassandra-instantaneous-in-place-node-replacement/
>>
>> >>
>> >> Regards,
>> >> Kyrill
>> >>
>> >> ________________________________________
>> >> From: Leena Ghatpande <lghatpa...@hotmail.com>
>> >> Sent: Tuesday, February 20, 2018 10:24:24 PM
>> >> To: user@cassandra.apache.org
>> >> Subject: Best approach to Replace existing 8 smaller nodes in
>> production cluster with New 8 nodes that are bigger in capacity, without a
>> downtime
>> >>
>> >> Best approach to replace the existing 8 smaller nodes in the production
>> cluster with 8 new nodes that are bigger in capacity, without downtime
>> >>
>> >> We have 4 nodes each in 2 DCs, and we want to replace these 8 nodes
>> with 8 new nodes that are bigger in capacity in terms of RAM, CPU and
>> disk space, without downtime.
>> >> The RF is currently set to 3, and we have 2 large tables with up to
>> 70 million rows.
>> >>
>> >> What would be the best approach to implement this?
>> >>    - Add 1 new node and decommission 1 old node at a time?
>> >>    - Add all new nodes to the cluster, and then decommission the old
>> nodes? If we do this, can we still keep RF=3 while we have 16 nodes in
>> the cluster, before we start decommissioning?
>> >>   - How long do we wait between adding or decommissioning nodes to
>> ensure the process is complete before we proceed?
>> >>   - Is there any tool we can use to check that an add/decommission is
>> done before we proceed to the next?
>> >>
>> >> Any other suggestion?
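On the monitoring question above, a hedged dry-run sketch of the checks commonly used between steps (printed rather than executed):

```shell
# Commands typically used to confirm a node add/decommission has
# finished before touching the next node; "run" only prints them.
run() { echo "+ $*"; }

run nodetool netstats         # shows active streams; idle once streaming is done
run nodetool status           # UJ = joining, UL = leaving; wait for UN everywhere
run nodetool compactionstats  # pending compactions after the streamed data lands
```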
>> >>
>> >>
>> >> ---------------------------------------------------------------------
>> >> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> >> For additional commands, e-mail: user-h...@cassandra.apache.org
>> >>
>> >
>>
>>
>>
>
