Reads still need to satisfy a quorum when you've specified QUORUM --
otherwise you have no consistency control.

Each read goes out to every node that holds a replica of the key (in your
case, all of them). Each node then independently consults its row cache and
either returns the cached data or goes through the normal key cache and
SSTables to read the data.

Once the coordinator node has received a quorum of matching results, the
read returns.
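
Here's a rough, self-contained sketch of that flow in Python (the class and
function names are made up for illustration -- this is not Cassandra's
actual read path):

    # Toy model of a quorum read against replicas that each keep a row cache.
    class Node:
        def __init__(self, data):
            self.data = data             # stands in for key cache + SSTables
            self.row_cache = {}

        def read(self, key):
            if key in self.row_cache:    # row cache hit
                return self.row_cache[key]
            value = self.data[key]       # miss: go through the normal read path
            self.row_cache[key] = value  # this replica fills its own cache
            return value

    def coordinator_read(key, replicas, quorum):
        # Ask the replicas and return once `quorum` matching results are in.
        results = [node.read(key) for node in replicas]
        if results.count(results[0]) < quorum:
            raise RuntimeError("no quorum of matching results")
        return results[0]

    nodes = [Node({"k1": "v1"}) for _ in range(5)]  # RF=5: every node has the row
    print(coordinator_read("k1", nodes, quorum=3))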

To answer your question: I think you either need a larger cache (if
measurement actually shows your response times are insufficient), or, for
performance, you should determine whether you really need QUORUM or whether
you could read at CL=ONE.
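
If you do want to compare, the consistency level is set per request. With
the DataStax Python driver, for example, something like this should work
(the keyspace and table names here are made up):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # QUORUM: the coordinator waits for a majority of replicas to answer.
    quorum_read = SimpleStatement(
        "SELECT * FROM my_table WHERE id = %s",
        consistency_level=ConsistencyLevel.QUORUM)

    # ONE: the coordinator returns after the first replica answers --
    # faster, but no guarantee you read the latest write.
    one_read = SimpleStatement(
        "SELECT * FROM my_table WHERE id = %s",
        consistency_level=ConsistencyLevel.ONE)

    row = session.execute(one_read, ["some-id"]).one()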

Someone more familiar with the row cache implementation may want to correct
me here, but I'd conjecture that with a 5-node cluster and RF=5, the row
caches should essentially end up with the same keys across all nodes
(ignoring things like cluster restarts, dropped mutations, etc.).

On Thu, Oct 10, 2013 at 10:03 AM, Artur Kronenberg <
artur.kronenb...@openmarket.com> wrote:

>  Hi.
>
> That is basically our set up. We'll be holding all data on all nodes.
>
> My problem was more on how the cache would behave. I thought it might go
> this way:
>
> 1. No cache hit
>
> Read from 3 nodes to verify results are correct and then return. Write
> result into RowCache.
>
> 2. Cache hit
>
> Read from Cache directly and return.
>
> If the value now gets updated, it would be found in the RowCache and either
> invalidated (hence case 1 on the next read) or updated (hence case 2 on the
> next read). However, I couldn't find any information on this.
>
> If this were the case, it would mean that each node would only have to hold
> 1/5 of my data in the cache (you're right about the DC clone, so 1/5 of the
> data instead of 1/10). If, however, 3 nodes have to be read each time and all
> 3 fill up the row cache with the same data, that would make my cache
> requirements bigger.
>
> Thanks!
>
> Artur
>
> On 10/10/13 14:06, Ken Hancock wrote:
>
>  If you're hitting 3/5 nodes, it sounds like you've set your replication
> factor to 5. Is that what you're doing so you can have a 2-node outage?
>
>  For a 5-node cluster, RF=5, each node will have 100% of your data (a
> second DC is just a clone), so with 3 GB off-heap per node it means that
> 3 GB / <total data size in GB> of your data would be cacheable in the row
> cache.
>
> On the other hand, if you're doing RF=3, each node will have 60% of your
> data instead of 100%, so the effective percentage of rows that are cacheable
> goes up by about 66%.
>
>  Great quick & dirty calculator: http://www.ecyrd.com/cassandracalculator/
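>
> As a quick worked example of that arithmetic (the total data size below is
> a made-up number):
>
>     cache_gb = 3.0                # off-heap row cache per node
>     data_gb = 100.0               # hypothetical total data set size, in GB
>
>     rf5_share = data_gb * 5 / 5   # RF=5 on 5 nodes: 100% of data per node
>     rf3_share = data_gb * 3 / 5   # RF=3 on 5 nodes: 60% of data per node
>
>     print(cache_gb / rf5_share)   # 0.03 -> 3% of rows fit in the cache
>     print(cache_gb / rf3_share)   # 0.05 -> 5%, i.e. about 66% more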
>
>
>
> On Thu, Oct 10, 2013 at 6:40 AM, Artur Kronenberg <
> artur.kronenb...@openmarket.com> wrote:
>
>>  I was reading through configuration tips for cassandra and decided to
>> use row-cache in order to optimize the read performance on my cluster.
>>
>> I have a cluster of 10 nodes, each of them operating with 3 GB off-heap,
>> using cassandra 2.4.1. I am doing local quorum reads, which means that I
>> will hit 3 nodes out of 5 because I split my 10 nodes into two data-centres.
>>
>>  I was under the impression that, since each node gets a certain range of
>> reads, my total amount of off-heap would be 10 * 3 GB = 30 GB. However, is
>> this still correct with quorum reads? How does Cassandra handle row-cache
>> hits in combination with quorum reads?
>>
>> Thanks!
>> -- artur
>>
>
>
>


-- 
*Ken Hancock *| System Architect, Advanced Advertising
SeaChange International
50 Nagog Park
Acton, Massachusetts 01720
ken.hanc...@schange.com | www.schange.com |
NASDAQ:SEAC <http://www.schange.com/en-US/Company/InvestorRelations.aspx>

Office: +1 (978) 889-3329 | Google Talk: ken.hanc...@schange.com |
Skype: hancockks | Yahoo IM: hancockks |
LinkedIn: <http://www.linkedin.com/in/kenhancock>
