> All the read/write requests are issued with CL LOCAL_QUORUM, but there
> are still a lot of inter-DC read requests.

How are you measuring this?
Cheers

-----------------
Aaron Morton
Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 22/07/2013, at 8:41 AM, sankalp kohli <kohlisank...@gmail.com> wrote:

> Slice queries do not trigger background read repair. See "Implement Read
> Repair on Range Queries".
>
> On Sun, Jul 21, 2013 at 1:40 PM, sankalp kohli <kohlisank...@gmail.com> wrote:
>
> There can be multiple reasons for that:
> 1) Background read repairs.
> 2) Your data is not consistent, leading to read repairs.
> 3) For writes, irrespective of the consistency level used, a single write
>    request will go to the other DC.
> 4) You might be running other nodetool commands, like repair.
>
> read_repair_chance
>
> (Default: 0.1 or 1) Specifies the probability with which read repairs
> should be invoked on non-quorum reads. The value must be between 0 and 1.
> For tables created in versions of Cassandra before 1.0, it defaults to 1.
> For tables created in Cassandra 1.0 and higher, it defaults to 0.1.
> However, for Cassandra 1.0, the default is 1.0 if you use the CLI or any
> Thrift client, such as Hector or pycassa, and 0.1 if you use CQL.
>
> On Sun, Jul 21, 2013 at 10:26 AM, Omar Shibli <o...@eyeviewdigital.com> wrote:
>
> One more thing: I'm doing a lot of key slice read requests. Is that
> supposed to change anything?
>
> On Sun, Jul 21, 2013 at 8:21 PM, Omar Shibli <o...@eyeviewdigital.com> wrote:
>
> I'm seeing a lot of inter-DC read requests, although I've followed the
> DataStax guidelines for multi-DC deployment:
> http://www.datastax.com/dev/blog/deploying-cassandra-across-multiple-data-centers
>
> Here is my setup:
> - 2 data centers within the same region (AWS)
> - Targeting DC, RF 3, 6 nodes
> - Analytics DC, RF 3, 11 nodes
>
> All the read/write requests are issued with CL LOCAL_QUORUM, but there
> are still a lot of inter-DC read requests.
> Any suggestions, or am I missing something?
>
> Thanks in advance,
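For reference, the `read_repair_chance` property discussed in the thread can be tuned per table from CQL. A minimal sketch, assuming a hypothetical keyspace/table named `my_keyspace.my_table` and a Cassandra version that still exposes these table properties; it disables cross-DC background read repair while keeping the default 10% chance of read repair confined to the local data center:

```cql
-- Hypothetical keyspace/table names.
-- read_repair_chance governs background read repair across all replicas
-- (including remote DCs); dclocal_read_repair_chance restricts it to the
-- coordinator's local data center.
ALTER TABLE my_keyspace.my_table
  WITH read_repair_chance = 0
  AND dclocal_read_repair_chance = 0.1;
```

Note this only affects the probabilistic background repair; reads that find replicas inconsistent at the requested consistency level still repair them in the foreground.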