I see, thanks for the input. Compression is not enabled at the moment, but
I may try increasing sstable_size_in_mb regardless.
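
For reference, the change I'd try is roughly the following (keyspace and
table names are placeholders, and 320 is just an example value I'd want to
validate against our actual flush sizes first):

    -- example only: raise the LCS target sstable size above the 160MB default
    ALTER TABLE my_ks.my_table WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': 320
    };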

Also, I don't think in-memory tables would work, since the dataset is
actually quite large. The pattern is more that a given set of rows receives
many overwriting updates and then isn't touched for a while.

On Fri, Feb 27, 2015 at 2:27 PM, Robert Coli <rc...@eventbrite.com> wrote:

> On Fri, Feb 27, 2015 at 2:01 PM, Dan Kinder <dkin...@turnitin.com> wrote:
>
>> Theoretically sstable_size_in_mb could be causing it to flush (it's at
>> the default 160MB)... though we are flushing well before we hit 160MB. I
>> have not tried changing this, but we don't necessarily want all the
>> sstables to be large anyway.
>>
>
> I've always wished that the log message told you *why* the SSTable was
> being flushed, which of the various bounds prompted the flush.
>
> In your case, the size on disk may be under 160MB because compression is
> enabled. I would start by increasing that size.
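>
> A quick sanity check (table name is just an example) is to look at the
> table's compression settings in cqlsh:
>
>     -- prints the full table schema, including the compression map
>     DESCRIBE TABLE my_ks.my_table;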
>
> DataStax DSE has in-memory tables for this use case.
>
> =Rob
>
>


-- 
Dan Kinder
Senior Software Engineer
Turnitin – www.turnitin.com
dkin...@turnitin.com
