Re: Row Mutation Errors while upgrading to Cassandra2.0

2013-09-23 Thread sankalp kohli
"It is quite possible that this is expected, major version upgrades semi-frequently spam logs with non-pathological error messages." The exception is while trying to deserialize the endpoints in the remote DC. Due to this error, the mutation will not be applied to any node in the remote DC. On M

Re: Row Mutation Errors while upgrading to Cassandra2.0

2013-09-23 Thread Robert Coli
On Sun, Sep 22, 2013 at 7:02 PM, Shashilpi Krishan <shashilpi.kris...@wizecommerce.com> wrote: > We had a Cassandra cluster (running with v1.0.7) spread across 3 data centers with each data center having 16 nodes. We started upgrading that to 2.0 but realized that we can’t go directly to 2.0

Re: Row Mutation Errors while upgrading to Cassandra2.0

2013-09-23 Thread sankalp kohli
*Shashilpi Krishan* > *From:* sankalp kohli [mailto:kohlisank...@gmail.com] > *Sent:* Monday, September 23, 2013 8:01 AM > *To:* user@cassandra.apache.org > *Subject:* Re: Row Mutation Errors while upgrading to Cassandra2.0 > You are upgrading to 2.0 in Prod? What is the urgency?

RE: Row Mutation Errors while upgrading to Cassandra2.0

2013-09-23 Thread Shashilpi Krishan
…reason. Thanks & Regards, Shashilpi Krishan From: sankalp kohli [mailto:kohlisank...@gmail.com] Sent: Monday, September 23, 2013 8:01 AM To: user@cassandra.apache.org Subject: Re: Row Mutation Errors while upgrading to Cassandra2.0 You are upgrading to 2.0 in Prod? What is the urgency?

Re: Row Mutation Errors while upgrading to Cassandra2.0

2013-09-22 Thread sankalp kohli
You are upgrading to 2.0 in Prod? What is the urgency? On Sun, Sep 22, 2013 at 7:02 PM, Shashilpi Krishan <shashilpi.kris...@wizecommerce.com> wrote: > Hi Everyone. > We had a Cassandra cluster (running with v1.0.7) spread across 3 data centers with each data center having 16 nodes.

Row Mutation Errors while upgrading to Cassandra2.0

2013-09-22 Thread Shashilpi Krishan
Hi Everyone. We had a Cassandra cluster (running with v1.0.7) spread across 3 data centers, with each data center having 16 nodes. We started upgrading to 2.0 but realized that we can't go directly to 2.0 due to read failures; hence, to avoid downtime, we have to go from 1.0 --> 1.1 --> 1.2 --> 2.0.
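Since several replies ask about the mechanics of this stepwise path, here is a minimal sketch of how one might drive the 1.0 --> 1.1 --> 1.2 --> 2.0 hops one node at a time so the cluster stays available. nodetool drain and nodetool upgradesstables are real Cassandra commands; the host list, SSH wrapper, and upgrade-cassandra package command are placeholders standing in for your own tooling.

import subprocess

NODES = ["dc1-node01", "dc1-node02"]   # ...the thread has 16 nodes per DC, 3 DCs
UPGRADE_PATH = ["1.1", "1.2", "2.0"]   # never skip a major version

def ssh(host, *cmd):
    # Run a command on a node; check=True makes a failing node halt the roll.
    subprocess.run(["ssh", host, *cmd], check=True)

def upgrade_node(host, version):
    ssh(host, "nodetool", "drain")                    # flush memtables, stop accepting writes
    ssh(host, "sudo", "upgrade-cassandra", version)   # placeholder for your package manager
    ssh(host, "sudo", "service", "cassandra", "restart")
    ssh(host, "nodetool", "upgradesstables")          # rewrite SSTables in the new on-disk format

for version in UPGRADE_PATH:
    for host in NODES:                                # one node at a time keeps quorum available
        upgrade_node(host, version)

Note that between hops the cluster necessarily runs mixed versions, which is exactly the window in which deserialization errors like the ones above appear; it is generally recommended to avoid repairs, bootstraps, and other topology changes until every node has completed the current hop.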