Note that the article pretty much covers it all. The nice thing about rapid read protection is that the dynamic snitch uses per-node statistics to pick which node(s) to read from (in this case one), so a single poorly performing table (perhaps corrupted SSTables on that node causing no responses and timeouts) can have no significant effect on the dynamic snitch's scores; rapid read protection, however, will proactively issue a speculative read to another node that satisfies the request long before the slow node times out. (Though frankly, in practice we keep timeouts pretty low, since we have multiple levels of fallback retry and would much rather retry quickly than wait and hope.)

On Mar 7, 2014, at 8:12 PM, Michael Shuler <mich...@pbandjelly.org> wrote:
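For reference, rapid read protection is configured per table via the speculative_retry option described in the blog post linked below; a minimal sketch in CQL, assuming a hypothetical table named users:

```sql
-- Speculative retry settings available in Cassandra >= 2.0.2.
-- 'users' is a hypothetical table name.

-- Send a redundant read to another replica if the first replica
-- has not answered within the table's 99th-percentile read latency:
ALTER TABLE users WITH speculative_retry = '99percentile';

-- Or use a fixed delay instead of a latency percentile:
ALTER TABLE users WITH speculative_retry = '10ms';

-- Other accepted values: 'ALWAYS' (always read from all replicas)
-- and 'NONE' (disable rapid read protection for this table).
```

The percentile form is usually preferable, since it adapts to the table's actual latency distribution rather than a guessed fixed delay.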
> Actually, this is tunable in >= 2.0.2 ;)
>
> http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2
>
> Michael
>
> On 03/07/2014 07:33 PM, Jonathan Lacefield wrote:
>> Yikes, my apologies. B is not the answer.
>>
>> On Mar 7, 2014, at 8:24 PM, Russell Hatch <rha...@datastax.com> wrote:
>>
>>> If you are using cqlsh, you can get a look at what's happening behind
>>> the scenes by enabling tracing with 'tracing on;' before executing a
>>> query. In this scenario you'll see 'Sending message to [ip address]'
>>> for each of the replicas.
>>>
>>> On Fri, Mar 7, 2014 at 5:44 PM, Jonathan Lacefield
>>> <jlacefi...@datastax.com> wrote:
>>>
>>> B is the answer
>>>
>>> > On Mar 7, 2014, at 7:35 PM, James Lyons <james.ly...@gmail.com> wrote:
>>> >
>>> > I'm wondering about the following scenario.
>>> >
>>> > Consider a cluster of nodes with a replication factor of, say, 3.
>>> > When performing a read at "read one" consistency, and let's say my
>>> > client isn't smart enough to route the request to the Cassandra
>>> > node housing the data at first, the contacted node acts as a
>>> > coordinator and forwards the request to:
>>> > A) a node that houses the data, and waits for a reply, possibly
>>> > timing out and re-issuing to another node in a failure or slow-host
>>> > scenario,
>>> > or
>>> > B) all (3) nodes that house the data, returning after any
>>> > one of them replies.
>>> >
>>> > I'm hoping for B... anyone know for sure?
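Russell's tracing suggestion above can be tried directly in cqlsh; a minimal sketch (the keyspace, table, and key are hypothetical):

```sql
-- In cqlsh, enable tracing before issuing the query:
TRACING ON;

-- Any subsequent read prints the coordinator's trace, including a
-- 'Sending message to [ip address]' line per contacted replica:
SELECT * FROM my_keyspace.users WHERE id = 42;

TRACING OFF;
```

With rapid read protection active, the trace will also show the speculative read being dispatched to an additional replica when the first one is slow to respond.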