280 sec: 865658 operations; 2661.5 current ops/sec; [INSERT AverageLatency(us)=3640.16]
290 sec: 865658 operations; 0 current ops/sec;
It may also indicate that C* is trying to finish its active tasks and that your write requests have been sitting in the queue for the whole 10 seconds. Try monitoring C* with "watch nodetool tpstats" and "watch nodetool compactionstats"; any value > 0 in the "pending" column is a bad sign. Enable GC logging in cassandra-env.sh. How much memory is free while C* is running? Increasing the heap size can cause long GC pauses, since the GC has to collect and copy more memory, and GC time also depends on your CPU resources. Try running C* with the default settings and monitor it to find the bottleneck.
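A minimal sketch of what that monitoring and the GC-logging change could look like. The 10-second interval and the gc.log path are assumptions; the JVM_OPTS lines are the usual GC-logging options that ship commented out in cassandra-env.sh, so check your copy of the file for the exact set.

    # Watch thread pools and compactions every 10 seconds; "pending"
    # counts that keep growing point at the bottleneck.
    $ watch -n 10 nodetool tpstats
    $ watch -n 10 nodetool compactionstats

    # In cassandra-env.sh, uncomment (or add) the GC logging options and
    # restart the node. The log path below is only an example.
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"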
On 03/26/2014 05:54 PM, Jiaan Zeng wrote:

Hi,

I am doing some performance benchmarks on a *single* node Cassandra 1.2.4. BTW, the machine is dedicated to running one Cassandra instance. The workload is 100% write. The throughput varies dramatically and sometimes even drops to 0. I have tried several things below and still get the same observation. There are no errors in the log file. One thing I spotted in the log is that GCInspector reports GC taking more than 200 ms; I think that is because of the memtable size setting. If I lower the memtable size, that kind of report goes away. Any clues about what is happening here, and suggestions on how to achieve a stable write throughput? Thanks a lot.

1) Increase the heap size from 4 GB to 8 GB. The total memory is 16 GB.
2) Increase "memtable_total_space_in_mb" and "commitlog_total_space_in_mb" to decrease the number of memtable flushes.
3) Disable compaction to eliminate the impact of compaction on disk.

Below is an example of the throughput.

280 sec: 865658 operations; 2661.5 current ops/sec; [INSERT AverageLatency(us)=3640.16]
290 sec: 865658 operations; 0 current ops/sec;
300 sec: 903204 operations; 3754.22 current ops/sec; [INSERT AverageLatency(us)=12341.77]
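For reference, steps 1) and 2) above map onto settings roughly like the following sketch. The concrete values, the HEAP_NEWSIZE line, and the system.log path are assumptions for illustration, not taken from the original post.

    # cassandra-env.sh -- step 1): pin the heap explicitly.
    # 8 GB heap on a 16 GB machine, as described; HEAP_NEWSIZE is an assumed value.
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"

    # cassandra.yaml -- step 2): more room for memtables and the commit log,
    # so flushes happen less often. The numbers are illustrative only.
    #   memtable_total_space_in_mb: 4096
    #   commitlog_total_space_in_mb: 8192

    # Quick check of whether GCInspector pauses line up with the throughput dips:
    $ grep GCInspector /var/log/cassandra/system.log | tail -20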