Re: Multi-dc restart impact

2013-10-10 Thread Chris Burroughs
Thanks, double checked; reads are CL.ONE.

On 10/10/2013 11:15 AM, J. Ryan Earl wrote:
> Are you doing QUORUM reads instead of LOCAL_QUORUM reads?
>
> On Wed, Oct 9, 2013 at 7:41 PM, Chris Burroughs wrote:
>> I have not been able to do the test with the 2nd cluster, but have
>> been given a disturbing data point…

Re: Multi-dc restart impact

2013-10-10 Thread J. Ryan Earl
Are you doing QUORUM reads instead of LOCAL_QUORUM reads?

On Wed, Oct 9, 2013 at 7:41 PM, Chris Burroughs wrote:
> I have not been able to do the test with the 2nd cluster, but have been
> given a disturbing data point. We had a disk slowly fail causing a
> significant performance degradation that was only resolved when the
> "sick" node was killed. …
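
For context, the distinction Ryan is asking about is set per statement in
the client. A minimal sketch with the DataStax Java driver (2.x-era API;
the contact point, keyspace, and table names are hypothetical): QUORUM
waits on a majority of all replicas counted across both DCs, so nodes
restarting on the far coast sit on the read path, while LOCAL_QUORUM
waits only on a majority within the coordinator's own DC.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class ConsistencyDemo {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")           // hypothetical local-DC node
                    .build();
            Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

            // QUORUM: majority of ALL replicas, counted across both DCs,
            // so a remote-DC restart can appear in read latency.
            Statement quorumRead =
                    new SimpleStatement("SELECT * FROM my_table WHERE id = 42")
                            .setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(quorumRead);

            // LOCAL_QUORUM: majority of replicas in the coordinator's DC only,
            // so a rolling restart in the remote DC stays off the read path.
            Statement localRead =
                    new SimpleStatement("SELECT * FROM my_table WHERE id = 42")
                            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(localRead);

            cluster.close();
        }
    }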

Re: Multi-dc restart impact

2013-10-09 Thread Chris Burroughs
I have not been able to do the test with the 2nd cluster, but have been given a disturbing data point. We had a disk slowly fail causing a significant performance degradation that was only resolved when the "sick" node was killed.

* Perf in DC w/ sick disk: http://i.imgur.com/W1I5ymL.png?1
* …
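
At CL.ONE each read is served by a single replica, so routing around a
slow-but-alive node is the dynamic snitch's job; a node that is sick
rather than down can keep scoring well enough to stay on the read path.
The relevant cassandra.yaml knobs, shown here with their 1.2-era defaults
as a sketch rather than a recommendation:

    # How often the dynamic snitch re-scores nodes from observed read latency.
    dynamic_snitch_update_interval_in_ms: 100
    # How often scores are reset so a recovered node can win traffic back.
    dynamic_snitch_reset_interval_in_ms: 600000
    # How much worse (as a fraction) a pinned replica may score before
    # reads are routed to a better-scoring replica.
    dynamic_snitch_badness_threshold: 0.1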

Re: Multi-dc restart impact

2013-09-18 Thread Chris Burroughs
On 09/17/2013 04:44 PM, Robert Coli wrote:
> On Thu, Sep 5, 2013 at 6:14 AM, Chris Burroughs wrote:
>> We have a 2 DC cluster running cassandra 1.2.9. They are in actual,
>> physically separate DCs on opposite coasts of the US, not just logical
>> ones. The primary use of this cluster is CL.ONE reads out of a single
>> column family…

Re: Multi-dc restart impact

2013-09-17 Thread Robert Coli
On Thu, Sep 5, 2013 at 6:14 AM, Chris Burroughs wrote:
> We have a 2 DC cluster running cassandra 1.2.9. They are in actual,
> physically separate DCs on opposite coasts of the US, not just logical
> ones. The primary use of this cluster is CL.ONE reads out of a single
> column family. My expectation…
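
The setup Chris describes implies a keyspace replicated into both DCs. As
a minimal sketch of that kind of definition (driver boilerplate as above;
the keyspace and the DC names "east"/"west" are hypothetical, since real
DC names come from the cluster's snitch):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class KeyspaceSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
            Session session = cluster.connect();
            // NetworkTopologyStrategy keeps a full replica set per DC, so a
            // CL.ONE read can be served entirely from the local coast; that
            // is the basis for expecting remote-DC restarts to have little
            // impact on local reads.
            session.execute(
                "CREATE KEYSPACE my_keyspace WITH replication = "
              + "{'class': 'NetworkTopologyStrategy', 'east': 3, 'west': 3}");
            cluster.close();
        }
    }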