Re: read one -- internal behavior

2014-03-08 Thread graham sanderson
Note that the article pretty much covers it all; the nice thing about rapid-read protection is that the dynamic snitch uses per-node statistics to pick which node(s) to query (in this case one), so a single poorly performing table (perhaps corrupted SSTables on that node causing no responses and
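
A minimal cqlsh sketch of the per-table tuning being discussed (keyspace and table names are hypothetical), assuming Cassandra >= 2.0.2:

    ALTER TABLE my_keyspace.my_table WITH speculative_retry = '99percentile';
    -- 'ALWAYS', 'NONE', or a fixed delay such as '10ms' are also accepted:
    ALTER TABLE my_keyspace.my_table WITH speculative_retry = '10ms';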

Re: read one -- internal behavior

2014-03-07 Thread Michael Shuler
Actually, this is tunable in >= 2.0.2 ;) http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2 Michael On 03/07/2014 07:33 PM, Jonathan Lacefield wrote: Yikes, my apologies. B is not the answer. On Mar 7, 2014, at 8:24 PM, Russell Hatch <rha...@datastax.com> wrote:

Re: read one -- internal behavior

2014-03-07 Thread Jonathan Lacefield
Yikes, my apologies. B is not the answer. On Mar 7, 2014, at 8:24 PM, Russell Hatch wrote: If you are using cqlsh, you can get a look at what's happening behind the scenes by enabling tracing with 'tracing on;' before executing a query. In this scenario you'll see 'Sending message to [ip address]

Re: read one -- internal behavior

2014-03-07 Thread Russell Hatch
If you are using cqlsh, you can get a look at what's happening behind the scenes by enabling tracing with 'tracing on;' before executing a query. In this scenario you'll see 'Sending message to [ip address]' for each of the replicas. On Fri, Mar 7, 2014 at 5:44 PM, Jonathan Lacefield wrote: > B
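
A minimal cqlsh sketch of the tracing workflow described above (keyspace, table, and key are hypothetical):

    TRACING ON;
    SELECT * FROM my_keyspace.my_table WHERE id = 42;
    -- the trace printed after the result includes a 'Sending message to /<ip>'
    -- event for each replica the coordinator contacts, with per-step latencies
    TRACING OFF;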

Re: read one -- internal behavior

2014-03-07 Thread Jonathan Lacefield
B is the answer. > On Mar 7, 2014, at 7:35 PM, James Lyons wrote: > > I'm wondering about the following scenario. > > Consider a cluster of nodes with replication, say, 3. > When performing a read at "read one" consistency, and let's say my client isn't > smart enough to route the request to the Cass

read one -- internal behavior

2014-03-07 Thread James Lyons
I'm wondering about the following scenario. Consider a cluster of nodes with replication, say, 3. When performing a read at "read one" consistency, and let's say my client isn't smart enough to route the request to the Cassandra node housing the data at first, the contacted node acts as a coordinator
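
A minimal cqlsh sketch of the scenario in the question (keyspace and table names are hypothetical), assuming a multi-node cluster:

    CREATE KEYSPACE my_keyspace
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    USE my_keyspace;
    CREATE TABLE my_table (id int PRIMARY KEY, val text);
    CONSISTENCY ONE;
    -- whichever node cqlsh connected to acts as the coordinator for this read;
    -- at CONSISTENCY ONE it waits for a response from a single replica
    SELECT val FROM my_table WHERE id = 1;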