Well, it turns out the Read-Request Latency graph in OpsCenter is
highly misleading.
Using jconsole, the read latency for the column family in question is
in fact normally around 800 microseconds, punctuated by occasional
large spikes that drive up the averages.
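For reference, that number comes from the per-column-family latency
attribute in jconsole - something like the MBean path below, though the
exact attribute name may differ between Cassandra versions:

org.apache.cassandra.db > ColumnFamilies > <keyspace> > <column_family>
    -> Attributes -> RecentReadLatencyMicros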
Towards the end of the batch process, the OpsCenter-reported average
latency is up above 4000 microseconds, and forced compactions no longer
help drive the latency back down.
I'm going to stop relying on OpsCenter for performance-analysis
metrics; it just doesn't have the resolution.
The only things left on my list to investigate are memtable
sizes/eviction and JNA, plus trying to capture some of the requests
that are causing the spikes.
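For the spikes and the memtable question, the per-column-family numbers
from nodetool should have enough resolution - something along these
lines (keyspace/column family names are placeholders):

nodetool cfhistograms <keyspace> <column_family>   # per-CF read latency distribution
nodetool cfstats                                   # memtable sizes and SSTable counts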
James M
On 31/12/12 10:05, James Masson wrote:
Hi Yiming,
I've had the chance to observe how Cassandra read response times behave
over time.
It starts out with fast 1ms reads until the first compaction starts;
then the CPUs are maxed out for a period and read latency rises to 4ms.
After compaction finishes, the system returns to 1ms reads and low CPU use.
This cycle repeats a few more times, but eventually compactions become
more and more infrequent and read latency stays stuck at 4ms for the
rest of the batch operation.
I understand why compaction occurs, but not why it takes so long for
our dataset, or why performance eventually fails to return to its
original level.
Our dataset just about fits in each node's disk cache, so compaction
should be a matter of memory and CPU bandwidth, bottlenecked by disk
writes. I see near-zero disk I/O, and the SAN is easily capable of
sustained 100Mb/s writes.
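For what it's worth, compaction progress and disk activity are easy
enough to watch from the shell - something like (the 5-second iostat
interval is arbitrary):

nodetool compactionstats   # pending compactions and bytes compacted
iostat -x 5                # per-device I/O - where the near-zero activity shows up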
I'm using a fairly stock Cassandra config. I'm tempted to just set this
to unlimited:
# Throttles compaction to the given total throughput across the entire
# system. The faster you insert data, the faster you need to compact in
# order to keep the sstable count down, but in general, setting this to
# 16 to 32 times the rate you are inserting data is more than sufficient.
# Setting this to 0 disables throttling. Note that this accounts for all
# types of compaction, including validation compaction.
compaction_throughput_mb_per_sec: 16
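If I do go for unlimited, that would mean setting the value to 0 in
cassandra.yaml as per the comment above, or changing it on a running
node via nodetool:

# in cassandra.yaml - 0 disables throttling
compaction_throughput_mb_per_sec: 0

# or at runtime, without a restart
nodetool setcompactionthroughput 0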
About the only thing I have changed is this:
# For workloads with more data than can fit in memory, Cassandra's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
concurrent_reads: 128
concurrent_writes: 32
On 28/12/12 14:02, Yiming Sun wrote:
Is there any chance of increasing the VM specs? I can't pinpoint
exactly which message it was, but you mentioned the VMs have 2GB of
memory and 2 cores, which is a bit meager.
The dataset pretty much all fits in RAM, and using 4GHz of CPU time to
serve about 500 key-value pairs per second is pretty poor performance
compared to Cassandra's competitors, no? I'd rather understand why
performance is bad than throw hardware into a black hole!
Also, is it possible to batch the writes together?
I'll ask.
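If the client library supports it, I assume batching would look
something like this in CQL (table and column names here are just
placeholders):

BEGIN BATCH
  INSERT INTO kv_table (key, value) VALUES ('k1', 'v1');
  INSERT INTO kv_table (key, value) VALUES ('k2', 'v2');
APPLY BATCH;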
thanks for persevering!
James M