..., due to a really intensive delete workload, the SSTable is promoted to the
next level ...

Is Cassandra designed for *delete* workloads? I doubt it. Perhaps look at
some other alternative like TTLs?
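
A rough sketch of what I mean (the keyspace/table/column names here are made
up): write with a TTL so rows expire on their own instead of issuing explicit
deletes:

INSERT INTO ks.events (id, payload)
VALUES (uuid(), 'some data')
USING TTL 86400;  -- row expires after one day; no DELETE needed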

jason

On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar <khangaon...@gmail.com>
wrote:

> Hi,
>
> For a delete intensive workload (which translates to write intensive), is
> there any reason to use leveled compaction? The recommendation seems to be
> that leveled compaction is suited for read intensive workloads.
>
> Depending on your use case, you might be better off with the date tiered or
> size tiered strategy, as sketched below.
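>
> For example (a minimal sketch; the table name ks.events is hypothetical),
> switching an existing table to size tiered compaction would look like:
>
> ALTER TABLE ks.events
> WITH compaction = {'class': 'SizeTieredCompactionStrategy'};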
>
> regards
>
> On Sun, May 24, 2015 at 10:50 AM, Stefano Ortolani <ostef...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> I have a question re leveled compaction strategy that has been bugging me
>> quite a lot lately. Based on what I understood, a compaction takes place
>> when the SSTable gets to a specific size (10 times the size of its previous
>> generation). My question is about an edge case where, due to a really
>> intensive delete workload, the SSTable is promoted to the next level (say
>> L1) and its size, because of the many evicted tombstones, falls back to 1/10
>> of its size (hence to a size compatible with the previous generation, L0).
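>>
>> For concreteness, this is the kind of table I have in mind (names made up;
>> sstable_size_in_mb shown at its default):
>>
>> CREATE TABLE ks.events (
>>     id uuid PRIMARY KEY,
>>     payload text
>> ) WITH compaction = {
>>     'class': 'LeveledCompactionStrategy',
>>     'sstable_size_in_mb': 160,  -- each level targets 10x the previous one
>>     'tombstone_threshold': 0.2  -- default; single-SSTable compaction above this ratio
>> };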
>>
>> What happens in this case? If the next major compaction is set to happen
>> when the SSTable is promoted to L2, well, that might take too long and too
>> many tombstones could then appear in the meantime (and queries might
>> subsequently fail). Wouldn't it be more correct to flag the SSTable's
>> generation to its previous value (namely, not changing it even if a major
>> compaction took place)?
>>
>> Regards,
>> Stefano Ortolani
>>
>
>
>
> --
> http://khangaonkar.blogspot.com/
>
