"It is quite possible that this is expected, major version upgrades
semi-frequently spam logs with non-pathological error messages."
The exception occurs while trying to deserialize the endpoints in the
remote DC. Because of this error, the mutation is not applied to any node
in the remote DC.



On Mon, Sep 23, 2013 at 10:29 AM, Robert Coli <rc...@eventbrite.com> wrote:

> On Sun, Sep 22, 2013 at 7:02 PM, Shashilpi Krishan <
> shashilpi.kris...@wizecommerce.com> wrote:
>
>>  We had a Cassandra cluster (running with v1.0.7) spread across 3 data
>> centers with each data center having 16 nodes. We started upgrading that to
>> 2.0 but realized that we can't go directly to 2.0 due to read failures,
>> hence to avoid down time we have to go from 1.0 -> 1.1 -> 1.2 -> 2.0.
>>
>
>  In general you should not run a Cassandra version X.Y.Z in production
> where Z < 5. Although I notice down thread that this cluster is not serving
> a critical business function... :)
>
> As you have discovered, you also should not generally try to upgrade
> across more than one major version.
>
>
>> Now the problem is that while upgrading from 1.2 -> 2.0 we saw the errors
>> below flooding the system.log files in one data center only, until we had
>> upgraded all the nodes in every DC to 2.0; the error went away the moment
>> the last node was upgraded. Does anyone have an idea what could have been
>> causing this? Is it because of some version incompatibility?
>>
>
> I would probably file a JIRA with relevant details/log snippets. It is
> quite possible that this is expected, major version upgrades
> semi-frequently spam logs with non-pathological error messages.
>
> =Rob
>
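The constraint discussed above (you cannot jump from 1.0 straight to 2.0;
each major version must be passed through in order, with a full rolling
upgrade of every node at each hop) can be sketched as a tiny path planner.
This is an illustrative sketch only, not anything from the Cassandra
codebase; the version list and function name are assumptions for the
example.

```python
# Sketch of the "no skipping major versions" upgrade rule from the thread.
# MAJOR_VERSIONS reflects the upgrade order at the time of this discussion.
MAJOR_VERSIONS = ["1.0", "1.1", "1.2", "2.0"]


def plan_upgrade(current, target):
    """Return the sequence of major versions to pass through, in order.

    Each hop implies a rolling upgrade of *every* node in *every* DC
    before starting the next hop (which is why the mixed-version errors
    above only stopped once the last node was on 2.0).
    """
    i = MAJOR_VERSIONS.index(current)
    j = MAJOR_VERSIONS.index(target)
    if j <= i:
        raise ValueError("target must be a later major version than current")
    return MAJOR_VERSIONS[i + 1 : j + 1]


print(plan_upgrade("1.0", "2.0"))  # ['1.1', '1.2', '2.0']
```

In practice each hop would also include the usual per-node steps
(`nodetool drain`, stop, install, start, `nodetool upgradesstables`)
before moving on to the next node.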
