Hi all,

We have a table in Cassandra where we frequently overwrite recent inserts.
Compaction does a fine job with this but ultimately larger memtables would
reduce compactions.

The question is: can we make Cassandra use larger memtables and flush less
frequently? What currently triggers the flushes? OpsCenter shows them
flushing consistently at about 110MB in size, and we have plenty of memory
to go larger.

According to
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_memtable_thruput_c.html
we can raise the commit log space threshold, but this does not help; there
is plenty of runway there.
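For reference, these are the cassandra.yaml settings I understand can force
a memtable flush in 2.0 (names taken from the docs; the values below are
just illustrative, not our production settings, so verify against your own
version):

```yaml
# cassandra.yaml -- settings that can trigger memtable flushes

# Total space allowed for all memtables combined. When exceeded,
# Cassandra flushes the largest memtable. If commented out it
# defaults to a fraction of the JVM heap.
memtable_total_space_in_mb: 2048

# Total disk space for commit log segments. Hitting this cap also
# forces flushes of the oldest dirty memtables so segments can be
# recycled -- this is the "commit log space threshold" above.
commitlog_total_space_in_mb: 8192
```

If memtable_total_space_in_mb is being shared across several actively
written column families, that might explain per-table flushes landing well
below the configured total.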

Theoretically, sstable_size_in_mb could be causing the flushes (it's at the
default 160MB)... though we are flushing well before we hit 160MB. I have
not tried changing it, but we don't necessarily want all the sstables to
be large anyway.

Thanks,
-dan
