Hi all,

Thanks for your answers! Yes, I agree that a delete-intensive workload is not 
something Cassandra is designed for.

Unfortunately, this is to cope with some unexpected data transformations that I 
hope are temporary.

We chose the LCS strategy because of really wide rows that were spanning several 
SSTables under other compaction strategies (and hence leading to high-latency 
read queries).
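
For reference, switching the table over looked roughly like this (keyspace and 
table names made up for the example; we kept the default SSTable size):

    ALTER TABLE myks.events
    WITH compaction = {'class': 'LeveledCompactionStrategy',
                       'sstable_size_in_mb': 160};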

I was honestly thinking of scrapping and rebuilding the SSTables from scratch if 
this workload is confirmed to be temporary. Knowing the answer to my question 
above would help me second-guess my decision a bit less :)
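
If we do go down that route, I imagine something like the following (hypothetical 
and much-simplified schema, just to sketch the idea; the real table is far wider):

    CREATE TABLE myks.events_v2 (
        id      uuid,
        ts      timestamp,
        payload blob,
        PRIMARY KEY (id, ts)
    ) WITH compaction = {'class': 'LeveledCompactionStrategy'};

    -- copy the live rows across (e.g. with cqlsh COPY or sstableloader), then:
    DROP TABLE myks.events;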

Cheers,
Stefano

> On Mon, May 25, 2015 at 9:52 AM, Jason Wee <peich...@gmail.com> wrote:
> ...., due to a really intensive delete workload, the SSTable is promoted to 
> t......
> 
> Is Cassandra designed for *delete* workloads? I doubt it. Perhaps look at some 
> other alternative like TTL?
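> 
> For example, a table-level default TTL (hypothetical table name) would turn 
> the explicit deletes into expirations:
> 
>     ALTER TABLE myks.events WITH default_time_to_live = 86400;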
> 
> jason
> 
>> On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar <khangaon...@gmail.com> 
>> wrote:
>> Hi,
>> 
>> For a delete-intensive workload (which translates to write-intensive), is there 
>> any reason to use leveled compaction? The recommendation seems to be that 
>> leveled compaction is suited to read-intensive workloads.
>> 
>> Depending on your use case, you might be better off with the date-tiered or 
>> size-tiered strategy.
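>> 
>> For example (hypothetical table name):
>> 
>>     ALTER TABLE myks.events
>>     WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
>> 
>> or 'DateTieredCompactionStrategy' if the data is written in time order.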
>> 
>> regards
>> 
>>> On Sun, May 24, 2015 at 10:50 AM, Stefano Ortolani <ostef...@gmail.com> 
>>> wrote:
>>> Hi all,
>>> 
>>> I have a question re the leveled compaction strategy that has been bugging me 
>>> quite a lot lately. Based on what I understood, a compaction takes place 
>>> when the SSTable gets to a specific size (10 times the size of its previous 
>>> generation). My question is about an edge case where, due to a really 
>>> intensive delete workload, the SSTable is promoted to the next level (say 
>>> L1) and its size, because of the many evicted tombstones, falls back to 1/10 
>>> of its previous size (hence to a size compatible with the previous generation, L0).
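>>> 
>>> (For concreteness, and if I read the docs right: with the default 
>>> sstable_size_in_mb of 160, L1 would target roughly 10 x 160 MB = 1.6 GB, 
>>> and L2 roughly 16 GB.)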
>>> 
>>> What happens in this case? If the next major compaction is set to happen 
>>> when the SSTable is promoted to L2, well, that might take too long, and too 
>>> many tombstones could then appear in the meantime (and queries might 
>>> subsequently fail). Wouldn't it be more correct to flag the SSTable's 
>>> generation with its previous value (namely, not changing it even if a major 
>>> compaction took place)?
>>> 
>>> Regards,
>>> Stefano Ortolani
>> 
>> 
>> 
>> -- 
>> http://khangaonkar.blogspot.com/
