Hi Lapo

Take a look at TWCS, I think that could help your use case: 
https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
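
For reference, switching an existing table to TWCS is a single ALTER TABLE. A sketch — the keyspace/table names and the window size here are placeholders, and the usual guidance is to pick a window so the table's full retention period ends up as roughly 20–30 windows:

```sql
-- Hypothetical keyspace/table; adjust window unit/size to your retention.
ALTER TABLE my_keyspace.audit_log
WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 7
};
```

Since data in a window stops being compacted once the window closes, old SSTables stay untouched, which should also play much better with a snapshot-based backup.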

Regards 

Paul Chandler


> On 29 Dec 2022, at 08:55, Lapo Luchini <l...@lapo.it> wrote:
> 
> Hi, I have a table which gets (a lot of) data that is written once and very 
> rarely read (it is used for data that is mandatory for regulatory reasons), 
> and almost never deleted.
> 
> I'm using the default STCS as at the time I didn't know any better, but 
> SSTable sizes are getting huge, which is a problem both because they are 
> approaching the size of the available disk and because I'm using a 
> snapshot-based system to back up the node (and thus compacting a huge SSTable 
> into an even bigger one generates a lot of traffic for mostly-old data).
> 
> I'm thinking about switching to LCS (mainly to solve the size issue), but I 
> read that it is "optimized for read heavy workloads […] not a good choice for 
> immutable time series data". Given that I don't really care about write or 
> read speed, but would like SSTable sizes to have an upper limit, would this 
> strategy still be the best?
> 
> PS: When Googling around, a strategy called "incremental compaction" (ICS) 
> keeps showing up in results, but that's only available in ScyllaDB, right?
> 
> -- 
> Lapo Luchini
> l...@lapo.it
> 
