Following up on this, I was hoping to get everyone's take on my use case
for Cassandra and see whether you agree it can meet the requirements:

I have a very tight SLA around get times.  These are almost always
single-row fetches of 20-50 columns from a row that likely has under 200
columns.  The request must return in under 1 ms (this includes any latency
between the machines, though they are in the same rack with at least
1 Gb/s NICs).

My current configuration is 3 nodes with RF: 2, each node having dual
quad-core CPUs, 48 GB of RAM, and SSDs for the data.  The data set will
certainly exceed the RAM size but will always fit on SSD, and I currently
see a key cache hit rate of around 95%.  Assuming I can grow out nodes
linearly, is it reasonable to assume I can achieve sub-1 ms fetch times?

Note that the system will be processing increments, which will be part of
the fetches, and the write-to-read ratio will be around 2 to 1 (however, I
programmatically batch increments and flush them as a single command every
second to reduce load on the cluster; a rough sketch of that batching is
below).  I am using Astyanax 1.3 with Cassandra 1.1.7.  I have definitely
seen decreases in performance when compactions are running, but was hoping
Astyanax's latency-aware load balancing would mitigate this, assuming
compactions do not happen on multiple nodes simultaneously.
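
To be concrete, the batching looks something like the following.  This is
a minimal illustration rather than my production code: it assumes String
row keys and String counter columns in a column family I'm calling
"Counters" here, and that a Keyspace instance has already been built
elsewhere via AstyanaxContext; the class and helper names are placeholders.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.serializers.StringSerializer;

public class IncrementBuffer {

    // Placeholder counter column family: String row keys, String column names.
    private static final ColumnFamily<String, String> CF_COUNTERS =
            new ColumnFamily<String, String>("Counters",
                    StringSerializer.get(), StringSerializer.get());

    private final Keyspace keyspace;  // assumed to be built elsewhere via AstyanaxContext
    private final ConcurrentHashMap<String, AtomicLong> pending =
            new ConcurrentHashMap<String, AtomicLong>();
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor();

    public IncrementBuffer(Keyspace keyspace) {
        this.keyspace = keyspace;
        // Flush the accumulated deltas once per second as a single mutation batch.
        flusher.scheduleAtFixedRate(new Runnable() {
            public void run() { flush(); }
        }, 1, 1, TimeUnit.SECONDS);
    }

    // Callers record increments in memory; nothing hits the cluster yet.
    // Assumes "|" never appears in row keys or column names.
    public void increment(String rowKey, String column, long delta) {
        String key = rowKey + "|" + column;
        AtomicLong current = pending.putIfAbsent(key, new AtomicLong(delta));
        if (current != null) {
            current.addAndGet(delta);
        }
    }

    private void flush() {
        if (pending.isEmpty()) {
            return;
        }
        MutationBatch batch = keyspace.prepareMutationBatch();
        for (String key : pending.keySet()) {
            // Only this single flusher thread removes entries; a small race with
            // a concurrent increment is possible, which is fine for a sketch.
            AtomicLong delta = pending.remove(key);
            if (delta == null) {
                continue;
            }
            String[] parts = key.split("\\|", 2);
            batch.withRow(CF_COUNTERS, parts[0])
                 .incrementCounterColumn(parts[1], delta.get());
        }
        try {
            batch.execute();  // one round trip per second, not one per increment
        } catch (ConnectionException e) {
            // Real code would log and/or re-queue the lost deltas here.
        }
    }
}

The point is simply that each flush issues one batch per second rather than
one round trip per increment, which is what keeps the write load down.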

Thanks!

On 12/31/12 12:24 PM, "James Masson" <james.mas...@opigram.com> wrote:

>
>Well, it turns out the Read Request Latency graph in OpsCenter is
>highly misleading.
>
>Using jconsole, the read latency for the column family in question is
>normally around 800 microseconds, punctuated by occasional big spikes
>that drive up the averages.
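>
>For reference, this is roughly how that per-CF number can be pulled over
>JMX outside jconsole.  It's a minimal sketch that assumes the Cassandra
>1.1 MBean naming (org.apache.cassandra.db:type=ColumnFamilies,...), the
>default JMX port 7199 with no authentication, and placeholder keyspace /
>column family names:
>
>import javax.management.MBeanServerConnection;
>import javax.management.ObjectName;
>import javax.management.remote.JMXConnector;
>import javax.management.remote.JMXConnectorFactory;
>import javax.management.remote.JMXServiceURL;
>
>public class ReadLatencyProbe {
>    public static void main(String[] args) throws Exception {
>        JMXServiceURL url = new JMXServiceURL(
>                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
>        JMXConnector connector = JMXConnectorFactory.connect(url);
>        try {
>            MBeanServerConnection mbs = connector.getMBeanServerConnection();
>            // Per-column-family MBean; keyspace/CF names are placeholders.
>            ObjectName cf = new ObjectName(
>                    "org.apache.cassandra.db:type=ColumnFamilies,"
>                            + "keyspace=MyKeyspace,columnfamily=MyCF");
>            // RecentReadLatencyMicros resets each time it is read, so polling
>            // it periodically gives a per-interval average rather than a
>            // lifetime one.
>            Double micros = (Double) mbs.getAttribute(cf,
>                    "RecentReadLatencyMicros");
>            System.out.println("recent read latency (us): " + micros);
>        } finally {
>            connector.close();
>        }
>    }
>}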
>
>Towards the end of the batch process, the OpsCenter-reported average
>latency is up above 4000 microseconds, and forced compactions no longer
>help drive the latency down again.
>
>I'm going to stop relying on OpsCenter for performance analysis
>metrics; it just doesn't have the resolution.
>
>The only things left on my list are memtable sizes / eviction and JNA -
>and trying to capture some of the requests that are causing the spikes
>for further investigation.
>
>James M
>
>
>On 31/12/12 10:05, James Masson wrote:
>>
>> Hi Yiming,
>>
>> I've had the chance to observe what happens to Cassandra read response
>> time over time.
>>
>> It starts out with fast 1 ms reads until the first compaction starts;
>> then the CPUs are maxed out for a period and read latency rises to 4 ms.
>> After compaction finishes, the system returns to 1 ms reads and low CPU
>> use.
>>
>> This cycle repeats a few more times, but eventually, compactions become
>> more and more infrequent and read-latency is stuck at 4ms for the rest
>> of the batch operation.
>>
>> I understand why compaction occurs, but not why it takes so long for our
>> dataset, or why performance eventually seems not to return to the
>> original levels.
>>
>> Our dataset just about fits in each node's disk cache. Compaction
>> should be a matter of memory and CPU bandwidth, bottlenecked by disk
>> writes. I see near-zero disk I/O, and the SAN is easily capable of
>> sustained 100 Mb/s writes.
>>
>> I'm using a fairly stock Cassandra config.
>>
>> I'm tempted to just set the following to unlimited:
>>
>> # Throttles compaction to the given total throughput across the entire
>> # system. The faster you insert data, the faster you need to compact in
>> # order to keep the sstable count down, but in general, setting this to
>> # 16 to 32 times the rate you are inserting data is more than
>> # sufficient.
>> # Setting this to 0 disables throttling. Note that this accounts for all
>> # types of compaction, including validation compaction.
>> compaction_throughput_mb_per_sec: 16
>>
>> About the only thing I have changed is this:
>>
>> # For workloads with more data than can fit in memory, Cassandra's
>> # bottleneck will be reads that need to fetch data from
>> # disk. "concurrent_reads" should be set to (16 * number_of_drives) in
>> # order to allow the operations to enqueue low enough in the stack
>> # that the OS and drives can reorder them.
>> #
>> # On the other hand, since writes are almost never IO bound, the ideal
>> # number of "concurrent_writes" is dependent on the number of cores in
>> # your system; (8 * number_of_cores) is a good rule of thumb.
>> concurrent_reads: 128
>> concurrent_writes: 32
>>
>>
>> On 28/12/12 14:02, Yiming Sun wrote:
>>
>>> Is there any chance of increasing the VM specs? I couldn't pinpoint
>>> exactly which message mentioned that the VMs have 2 GB of memory and
>>> 2 cores, which is a bit meager.
>>
>> The data set pretty much all fits in RAM, and using 4 GHz of CPU time
>> to serve about 500 key-value pairs per second is pretty poor performance
>> compared to Cassandra's competitors, no? I'd rather understand why
>> performance is bad than throw hardware into a black hole!
>>
>>> Also is it possible to batch the writes together?
>>>
>>
>> I'll ask.
>>
>> thanks for persevering!
>>
>> James M
