My understanding is that you can't mix vnodes and regular nodes in the same DC.
Is that correct?


On Thu, Feb 6, 2014 at 2:16 PM, Vasileios Vlachos <vasileiosvlac...@gmail.com> wrote:

> Hello,
>
> My question is: why would you need another DC to migrate to vnodes? How
> about decommissioning each node in turn, changing cassandra.yaml
> accordingly, deleting the data, and bringing the node back into the
> cluster to let it bootstrap from the others?
>
> We did that recently with our demo cluster. Is that wrong in any way? The
> only thing to take into consideration, I think, is disk space. We are not
> using Amazon, but I am not sure how that would be different for this
> particular issue.
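>
> For reference, the per-node cycle on our demo cluster looked roughly like
> this (the data paths and num_tokens value are from our setup, so adjust
> them to yours):
>
> nodetool decommission              # stream this node's data to the others
> # stop cassandra, then in cassandra.yaml set:
> #   num_tokens: 256                # and comment out initial_token
> rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* \
>        /var/lib/cassandra/saved_caches/*
> service cassandra start            # node rejoins and bootstraps with vnodes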
>
> Thanks,
>
> Bill
> On 6 Feb 2014 16:34, "Alain RODRIGUEZ" <arodr...@gmail.com> wrote:
>
>> Glad it helps.
>>
>> Good luck with this.
>>
>> Cheers,
>>
>> Alain
>>
>>
>> 2014-02-06 17:30 GMT+01:00 Katriel Traum <katr...@google.com>:
>>
>>> Thank you Alain! That was exactly what I was looking for. I was worried
>>> I'd have to do a rolling restart to change the snitch.
>>>
>>> Katriel
>>>
>>>
>>>
>>> On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ <arodr...@gmail.com> wrote:
>>>
>>>> Hi, we did this exact same operation here too, with no issue.
>>>>
>>>> Contrary to Paulo, we did not modify our snitch.
>>>>
>>>> We simply added a "dc_suffix" property in the
>>>> cassandra-rackdc.properties conf file for the nodes in the new cluster:
>>>>
>>>> # Add a suffix to a datacenter name. Used by the Ec2Snitch and
>>>> # Ec2MultiRegionSnitch to append a string to the EC2 region name.
>>>> dc_suffix=-xl
>>>>
>>>> So our new cluster DC is basically: eu-west-xl
>>>>
>>>> I think this is less risky; at the least, it is easier to do.
>>>>
>>>> Hope this helps.
>>>>
>>>>
>>>> 2014-02-02 11:42 GMT+01:00 Paulo Ricardo Motta Gomes <paulo.mo...@chaordicsystems.com>:
>>>>
>>>>> We had a similar situation, and what we did was first migrate the 1.1
>>>>> cluster to GossipingPropertyFileSnitch, making sure that for each node
>>>>> we specified the correct availability zone as the rack in
>>>>> cassandra-rackdc.properties. This way, GossipingPropertyFileSnitch is
>>>>> equivalent to EC2MultiRegionSnitch, so the data location does not
>>>>> change and no repair is needed afterwards. So, if your nodes are
>>>>> located in the us-east-1e AZ, your cassandra-rackdc.properties should
>>>>> look like:
>>>>>
>>>>> dc=us-east
>>>>> rack=1e
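>>>>>
>>>>> The snitch itself is set in cassandra.yaml and only picked up on
>>>>> restart of each node (so a rolling restart), i.e. something like:
>>>>>
>>>>> endpoint_snitch: GossipingPropertyFileSnitch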
>>>>>
>>>>> After this step is complete on all nodes, you can add a new
>>>>> datacenter, specifying a different dc and rack in the
>>>>> cassandra-rackdc.properties of the new DC's nodes. Make sure you
>>>>> upgrade your initial datacenter to 1.2 before adding a new datacenter
>>>>> with vnodes enabled (of course).
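>>>>>
>>>>> On the new DC's nodes that means something like the following (the DC
>>>>> name here is just an example, pick your own):
>>>>>
>>>>> # cassandra-rackdc.properties
>>>>> dc=us-east-vnodes
>>>>> rack=1e
>>>>>
>>>>> # cassandra.yaml
>>>>> num_tokens: 256
>>>>> auto_bootstrap: false
>>>>>
>>>>> Once they have all joined, run "nodetool rebuild -- us-east" on each
>>>>> new node to stream the data over from the original DC.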
>>>>>
>>>>> Cheers
>>>>>
>>>>>
>>>>> On Sun, Feb 2, 2014 at 6:37 AM, Katriel Traum <katr...@google.com> wrote:
>>>>>
>>>>>> Hello list.
>>>>>>
>>>>>> I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
>>>>>> I've read here and in other places that the best way to migrate to
>>>>>> vnodes is to add a new DC with the same number of nodes and run
>>>>>> rebuild on each of them.
>>>>>> However, I'm faced with the fact that I'm using EC2MultiRegion
>>>>>> snitch, which automagically creates the DC and RACK.
>>>>>>
>>>>>> Any ideas how I can go about adding a new DC with this kind of setup?
>>>>>> I need these new machines to be in the same EC2 Region as the current
>>>>>> ones, so adding to a new Region is not an option.
>>>>>>
>>>>>> TIA,
>>>>>> Katriel
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Paulo Motta
>>>>>
>>>>> Chaordic | Platform
>>>>> www.chaordic.com.br
>>>>> +55 48 3232.3200
>>>>> +55 83 9690-1314
>>>>>
>>>>
>>>>
>>>
>>
