Thanks, double checked; reads are CL.ONE.

On 10/10/2013 11:15 AM, J. Ryan Earl wrote:
Are you doing QUORUM reads instead of LOCAL_QUORUM reads?
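(For context, the difference matters because of how many replicas each level blocks on. A rough sketch of the replica counts per consistency level, assuming NetworkTopologyStrategy with RF=3 in each DC, which the thread does not actually state:)

```python
# Sketch of the replica counts behind Cassandra's read consistency levels.
# RF values below are illustrative assumptions, not from this cluster.

def quorum(n):
    # A quorum is a strict majority of n replicas.
    return n // 2 + 1

def replicas_needed(rf_per_dc, level, local_dc="A"):
    """Replicas a read must wait for at a given consistency level.

    rf_per_dc: dict mapping DC name -> replication factor in that DC.
    """
    if level == "ONE":
        return 1
    if level == "LOCAL_QUORUM":
        # Majority in the coordinator's DC only; never crosses DCs.
        return quorum(rf_per_dc[local_dc])
    if level == "QUORUM":
        # Majority of *all* replicas, so with 2 DCs it must cross them.
        return quorum(sum(rf_per_dc.values()))
    raise ValueError(level)

rf = {"A": 3, "B": 3}
print(replicas_needed(rf, "ONE"))           # 1
print(replicas_needed(rf, "LOCAL_QUORUM"))  # 2
print(replicas_needed(rf, "QUORUM"))        # 4 -> forces cross-DC reads
```

With QUORUM at RF=3+3, four replicas must answer, so at least one response always comes from the remote DC; that is why QUORUM (unlike LOCAL_QUORUM or ONE) would make remote-DC latency visible.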


On Wed, Oct 9, 2013 at 7:41 PM, Chris Burroughs
<chris.burrou...@gmail.com> wrote:

I have not been able to do the test with the 2nd cluster, but have been
given a disturbing data point.  We had a disk slowly fail causing a
significant performance degradation that was only resolved when the "sick"
node was killed.
  * Perf in DC w/ sick disk: http://i.imgur.com/W1I5ymL.png?1
  * Perf in other DC: http://i.imgur.com/gEMrLyF.png?1

Not only was a single slow node able to cause an order-of-magnitude
performance hit in one DC, but the other DC fared *worse*.
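(One plausible mechanism, speculating: at CL.ONE the coordinator routes each read to the replica with the best dynamic-snitch latency score, so a sick node whose score lags reality can keep attracting reads. A toy sketch of score-based selection, not Cassandra's actual code:)

```python
# Toy model of dynamic-snitch-style replica selection at CL.ONE.
# Node names and scores are illustrative, not Cassandra internals.

def pick_replica(scores):
    # Coordinator sends the read to the replica with the lowest
    # (i.e. best) recent latency score.
    return min(scores, key=scores.get)

# Healthy steady state: node2 looks fastest and gets the read.
healthy = {"node1": 5.0, "node2": 3.0, "node3": 6.0}
print(pick_replica(healthy))  # node2

# If the sick node's score is stale (it looked fast before its disk
# degraded), coordinators keep routing reads to it, and client latency
# climbs even though two healthy replicas exist.
stale = {"sick": 1.0, "node2": 3.0, "node3": 6.0}
print(pick_replica(stale))  # sick
```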


On 09/18/2013 08:50 AM, Chris Burroughs wrote:

On 09/17/2013 04:44 PM, Robert Coli wrote:

On Thu, Sep 5, 2013 at 6:14 AM, Chris Burroughs
<chris.burrou...@gmail.com> wrote:

  We have a 2 DC cluster running cassandra 1.2.9.  They are in actual
physically separate DCs on opposite coasts of the US, not just logical
ones.  The primary use of this cluster is CL.ONE reads out of a single
column family.  My expectation was that in such a scenario restarts would
have minimal impact in the DC where the restart occurred, and no impact
in the remote DC.

We are seeing instead that restarts in one DC have a dramatic impact on
performance in the other (let's call them DCs "A" and "B").


Did you end up filing a JIRA on this, or some other outcome?

=Rob



No.  I am currently in the process of taking a 2nd cluster from a
single DC to dual DCs.  Once that is done I was going to repeat the
test with each cluster and gather as much information as is reasonable.




