What is the technical limitation that makes vnodes require murmur3? That
seems uncool for long-time users.

On Monday, December 30, 2013, Jean-Armel Luce <jaluc...@gmail.com> wrote:
> Hi,
>
> I don't know how your application works, but I explained during the last
> Cassandra Summit Europe how we did the migration from a relational
> database to Cassandra without any interruption of service.
>
> You can have a look at the video "C* Summit EU 2013: The Cassandra
> Experience at Orange".
>
> And use the mod_dup module https://github.com/Orange-OpenSource/mod_dup
>
> For copying data from your Cassandra 1.1 cluster to the Cassandra 1.2
> cluster, you can back up your data and then use sstableloader (in this
> case, you will not have to modify the timestamps as I did for the
> migration from relational to Cassandra).
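>
> For example, something like this (the host addresses and backup path are
> hypothetical; point it at your own keyspace/column family directory):
>
>     # Stream one column family's SSTables into the new 1.2 cluster.
>     # Run this from a machine that can reach the target nodes.
>     sstableloader -d 10.0.0.10,10.0.0.11 /var/backups/MyKeyspace/MyCF/
>
> sstableloader streams each row to the nodes that own its range, so the
> target cluster does not need to match the old token layout.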
>
> Hope that helps !!
>
> Jean Armel
>
>
>
> 2013/12/30 Tupshin Harper <tups...@tupshin.com>
>>
>> No, this is not going to work. The vnodes feature requires the Murmur3
>> partitioner, which was introduced with Cassandra 1.2.
>>
>> Since you are currently using 1.1, you must be using the random
>> partitioner, which is not compatible with vnodes.
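>>
>> You can see which one a node runs in its cassandra.yaml. A sketch of the
>> relevant line in each version (your file may differ):
>>
>>     # 1.1-era default (what your cluster is using):
>>     partitioner: org.apache.cassandra.dht.RandomPartitioner
>>
>>     # 1.2 default, the one vnodes setups normally use:
>>     # partitioner: org.apache.cassandra.dht.Murmur3Partitioner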
>>
>> Because the partitioner determines the physical layout of all of your
>> data on disk and across the cluster, it is not possible to change the
>> partitioner without taking some downtime to rewrite all of your data.
>>
>> You should probably plan on upgrading to 1.2, but without also switching
>> to vnodes at this point.
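>>
>> Concretely: when you move a node to 1.2, keep its existing token and do
>> not turn on vnodes. A sketch of the cassandra.yaml settings (token value
>> copied from your ring; num_tokens shown only to note it stays off):
>>
>>     # Keep one token per node, exactly as in 1.1:
>>     initial_token: 28356863910078205288614550619314017622
>>     # Leave num_tokens unset (or commented out) so vnodes stay disabled:
>>     # num_tokens: 256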
>>
>> -Tupshin
>>
>> On Dec 30, 2013 9:46 AM, "Katriel Traum" <katr...@google.com> wrote:
>>>
>>> Hello list,
>>> I have a two-DC setup with replication factor DC1:3, DC2:3. DC1 has 6
>>> nodes, DC2 has 3. The whole setup runs on AWS, on Cassandra 1.1.
>>> Here's my nodetool ring:
>>> 1.1.1.1  eu-west  1a  Up  Normal  55.07 GB   50.00%   0
>>> 2.2.2.1  us-east  1b  Up  Normal  107.82 GB  100.00%  1
>>> 1.1.1.2  eu-west  1b  Up  Normal  53.98 GB   50.00%   28356863910078205288614550619314017622
>>> 1.1.1.3  eu-west  1c  Up  Normal  54.85 GB   50.00%   56713727820156410577229101238628035242
>>> 2.2.2.2  us-east  1d  Up  Normal  107.25 GB  100.00%  56713727820156410577229101238628035243
>>> 1.1.1.4  eu-west  1a  Up  Normal  54.99 GB   50.00%   85070591730234615865843651857942052863
>>> 1.1.1.5  eu-west  1b  Up  Normal  55.1 GB    50.00%   113427455640312821154458202477256070484
>>> 2.2.2.3  us-east  1e  Up  Normal  106.78 GB  100.00%  113427455640312821154458202477256070485
>>> 1.1.1.6  eu-west  1c  Up  Normal  55.01 GB   50.00%   141784319550391026443072753096570088105
>>>
>>> I am going to upgrade my machine type, upgrade to 1.2, and shrink DC1
>>> from 6 nodes to 3. I will have to do this on the live system.
>>> I'd appreciate any comments about my plan.
>>> 1. Decommission a 1.1 node.
>>> 2. Bootstrap a new one in place, running Cassandra 1.2 with vnodes
>>> enabled (I am trying to avoid a rebalance later on); see the sketch
>>> after this list.
>>> 3. When done, decommission nodes 4-6 at DC1.
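>>> A rough sketch of steps 1-2 (the host name, token count, and service
>>> command are assumptions about my setup):
>>>
>>>     # Step 1: on the 1.1 node being retired, stream its ranges to the
>>>     # remaining replicas and leave the ring:
>>>     nodetool -h old-node-1 decommission
>>>
>>>     # Step 2: on the replacement node, install 1.2 and enable vnodes in
>>>     # cassandra.yaml before the first start:
>>>     #   num_tokens: 256
>>>     #   initial_token:        # left blank when using vnodes
>>>     # then start it so it bootstraps its vnode ranges:
>>>     sudo service cassandra start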
>>> Issues I've spotted:
>>> 1. I'm guessing I will have an unbalanced cluster for the period where
>>> I have 1.2 + vnodes and 1.1 mixed.
>>> 2. Rollback is cumbersome; snapshots won't help here.
>>> Any feedback appreciated
>>> Katriel
>
>

-- 
Sorry this was sent from mobile. Will do less grammar and spell check than
usual.
