Thank you both!

Robert, I read through the bug; it sounds like this behavior has been fixed
(or its impact reduced) in 2.1, and given that our data is pretty uniform
(with no overlap between rows/values), it doesn't look like we'll suffer
from it.  At least, that's what I understood from the bug.

I was actually discouraged from using leveled compaction because, in
general, it yields poorer I/O performance for write-heavy workloads.
However, since these machines see very few reads, I don't think some
compaction delay will hurt.

Is it possible to seamlessly migrate from SizeTiered to Leveled Compaction?
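For context, what I have in mind is just changing the table's compaction
strategy via CQL and letting background compaction re-organize the existing
SSTables; the keyspace and table names below are placeholders, and this is
only a sketch of the statement I'm considering:

```sql
-- Hypothetical keyspace/table names; switches the table from
-- SizeTieredCompactionStrategy to LeveledCompactionStrategy.
-- Existing SSTables get re-leveled by background compaction afterward.
ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'LeveledCompactionStrategy'};
```

My question is whether that change is safe to apply on a live cluster,
or whether there are caveats while the re-leveling catches up.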


On Thu, Jul 10, 2014 at 6:56 AM, Pavel Kogan <pavel.ko...@cortica.com>
wrote:

> Moving to Leveled compaction resolved the same problem for us. As Robert
> mentioned, use it carefully.
> Size-tiered compaction requires having 50% free disk space (also according
> to the DataStax documentation).
>
> Pavel
>
>
> On Wed, Jul 9, 2014 at 8:39 PM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Wed, Jul 9, 2014 at 4:27 PM, Andrew <redmu...@gmail.com> wrote:
>>
>>>  What kind of overhead should I expect for compaction, in terms of
>>> size?  In this use case, the primary use for compaction is more or less to
>>> clean up tombstones for expired TTLs.
>>>
>>
>> Compaction can result in output files >100% of the input, if compression
>> is used and the input SSTables are also compressed. If you use size tiered
>> compaction (STS), you therefore must have enough headroom to compact your
>> largest [n] SSTables together successfully.
>>
>> Leveled compaction (LCS) has a different, significantly lower, headroom
>> requirement.
>>
>> If you are making heavy use of TTL, you should be careful about using LCS
>> in certain cases; read:
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-6654 - "Droppable
>> tombstones are not being removed from LCS table despite being above 20%"
>>
>> =Rob
>>
>>
>