> However, Ben Black suggests here that the cleanup will actually only
> impact data deleted through the API:
>
> http://comments.gmane.org/gmane.comp.db.cassandra.user/4437
>
> In this case, I guess that we need not worry too much about the
> setting since we are actually updating, never deleting. Is this the
> case?

Yes, that's correct. GCGraceSeconds affects the lifetime of
tombstones, which are needed only when deleting data. Simple
overwrites do not involve tombstones, so GCGraceSeconds is not in
play. Overwritten columns are eliminated when their sstables are
compacted.
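
To make the distinction concrete, here is a toy model of compaction
(illustrative Python only, not Cassandra's actual code; the column
layout is made up for the example, though 864000 is the real default):

GC_GRACE_SECONDS = 864000  # Cassandra's default (10 days)

def compact(columns, now):
    # Merge duplicate columns, keeping the newest timestamp, then drop
    # tombstones (value None) once they are older than GC_GRACE_SECONDS.
    merged = {}
    for name, (value, ts) in columns:
        if name not in merged or ts > merged[name][1]:
            merged[name] = (value, ts)
    return dict((name, (value, ts))
                for name, (value, ts) in merged.items()
                if not (value is None and now - ts > GC_GRACE_SECONDS))

# Overwrite: compaction just keeps the newest version; no tombstone.
print(compact([("c1", ("old", 100)), ("c1", ("new", 200))], now=300))
# -> {'c1': ('new', 200)}

# Delete: the tombstone (None) must survive until GC_GRACE_SECONDS has
# passed, so it can suppress the deleted data on all replicas.
print(compact([("c1", ("data", 100)), ("c1", (None, 200))], now=300))
# -> {'c1': (None, 200)}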

> *Replication factor*
>
> Our use case is many more writes than reads, but when we do have reads
> they're random (we're not currently using hadoop to read entire CFs).
> I'm wondering what sort of level of RF to have for a cluster. We
> currently have 12 nodes and RF=4.
>
> To improve read performance I'm thinking of upping the number of nodes
> and keeping RF at 4.

In the absence of other bottlenecks, this makes sense, yes.

Another thing to consider is whether to turn off (if on 0.6) or adjust
the frequency of (in 0.7) read repair. If read repair is turned on
(0.6) or set to 100% (0.7), each read will hit all RF nodes (even if
you are reading at a low consistency level; with read repair, the
other replicas are still asked to read the data and send back a
checksum). If you expect to be I/O bound due to low locality of access
(the random reads), turning it down could yield up to a factor-of-RF
(in your case 4x) improvement in expected read throughput.

Whether or not turning off or decreasing read repair is acceptable is
of course up to your situation; in particular, if you read at e.g.
QUORUM you will still read from 3 nodes (in the case of RF=4)
regardless of read repair settings.
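
As a back-of-the-envelope sketch of both points (illustrative Python;
the formula simply restates the behaviour described above and is not
taken from Cassandra's source):

def quorum(rf):
    return rf // 2 + 1

def expected_replicas_read(rf, cl_nodes, read_repair_chance):
    # The cl_nodes required by the consistency level are always read;
    # the remaining replicas are read whenever read repair fires.
    return cl_nodes + read_repair_chance * (rf - cl_nodes)

rf = 4
for chance in (1.0, 0.1, 0.0):
    print("chance=%.1f: CL.ONE -> %.1f replicas, QUORUM -> %.1f" % (
        chance,
        expected_replicas_read(rf, 1, chance),
        expected_replicas_read(rf, quorum(rf), chance)))
# chance=1.0: CL.ONE -> 4.0 replicas, QUORUM -> 4.0
# chance=0.1: CL.ONE -> 1.3 replicas, QUORUM -> 3.1
# chance=0.0: CL.ONE -> 1.0 replicas, QUORUM -> 3.0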

> My understanding is that this means we're sharing
> the data around more.

Not sure what you mean. Given a constant RF of 4, you will still have
4 copies, but they will be distributed across additional machines,
meaning each machine holds less data and presumably receives fewer
requests.

> However it also means a client read to a random
> node has less chance of actually connecting to one of the nodes with
> the data on.

Keep in mind though that hitting the right node is somewhat of a
special case, and the overhead is limited to whatever the cost of RPC
is. If you are expecting to bottleneck on disk seeks (judging again by
your random-read comment), I would say you can completely ignore this.
When I say it's a special case, I mean that you're adding between 0
and 1 units of RPC overhead on average; no matter how large your
cluster is, your RPC overhead won't exceed 1, with 1 being whatever
the cost is to forward a request and response.
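
As a rough sketch of that bound (illustrative Python; it just assumes
requests land on a uniformly random node, it is not Cassandra code):

def expected_forwards(n_nodes, rf):
    # Chance a random coordinator already owns a replica is rf/n_nodes;
    # otherwise the read costs exactly one forwarded request+response.
    p_hit = min(rf / float(n_nodes), 1.0)
    return 1.0 - p_hit  # average extra hops, always strictly below 1

for n in (12, 24, 100):
    print("%d nodes, RF=4: %.2f extra hops/read" % (n, expected_forwards(n, 4)))
# 12 nodes, RF=4: 0.67 extra hops/read
# 24 nodes, RF=4: 0.83 extra hops/read
# 100 nodes, RF=4: 0.96 extra hops/read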

> On a similar note (read perf), I'm guessing that reading at weak
> consistency level will bring gains. Gleaned from this slide amongst
> other places:
>
> http://www.slideshare.net/mobile/benjaminblack/introduction-to-cassandra-replication-and-consistency#13
>
> Is this true, or will read repair still hammer disks in all the
> machines with the data on? Again I guess it's better to have low RF so
> there are fewer copies of the data to inspect when doing read repair.
> Will this result in better read performance?

Sorry, I did the impolite thing and began responding before having
read your entire E-Mail ;)

So yes, a low RF would increase read performance, but assuming you
care about data redundancy the better way to achieve that effect is
probably to decrease or disable read repair.

-- 
/ Peter Schuller
