I see, thanks Jason!
Can a dev confirm it is safe to apply those changes on live data? Also, if
I understood correctly, those parameters still obey gc_grace_seconds,
that is, no compaction to evict tombstones will take place before
gc_grace_seconds has elapsed, correct?
Cheers,
Stefano
On Tue, M
Hi Stefano,
I did a quick test; the ALTER looks almost instant, but bear in mind that
my test machine has no data loaded yet, and I was only switching from STCS
to LCS.
cqlsh:jw_schema1> CREATE TABLE DogTypes ( block_id uuid, species text,
alias text, population varint, PRIMARY KEY (block_id) ) WITH compaction =
{ 'class' : 'SizeTieredCompactionStrategy' };
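The switch itself was then just an ALTER of the compaction class, roughly
along these lines (a sketch against my test table above, not necessarily
the exact statement I ran):

cqlsh:jw_schema1> ALTER TABLE DogTypes WITH compaction = { 'class' : 'LeveledCompactionStrategy' };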
Ok, I am reading a bit more about compaction subproperties here (
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html)
and it seems that tombstone_threshold and unchecked_tombstone_compaction
might come in handy.
Does anybody know if changing any of these values (via ALTER) is safe on
live data?
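For the record, the kind of change I have in mind would look roughly like
this (mykeyspace.mytable is a placeholder, and the values are only
illustrative, not a recommendation):

ALTER TABLE mykeyspace.mytable WITH compaction = {
    'class' : 'LeveledCompactionStrategy',
    'tombstone_threshold' : '0.2',
    'unchecked_tombstone_compaction' : 'true'
};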
Hi all,
Thanks for your answers! Yes, I agree that a delete intensive workload is not
something Cassandra is designed for.
Unfortunately this is to cope with some unexpected data transformations that I
hope are a temporary thing.
We chose LCS strategy because of really wide rows which were spa
, due to a really intensive delete workload, the SSTable is promoted
to t..
Is Cassandra designed for *delete* workloads? I doubt so. Perhaps look at
some other alternative like TTL?
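With TTL the data expires on its own instead of being deleted explicitly;
a rough sketch with a made-up table (id and payload are placeholders):

INSERT INTO mykeyspace.mytable (id, payload)
VALUES (uuid(), 'some value')
USING TTL 86400;  -- row expires about one day after the write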
jason
On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar
wrote:
> Hi,
>
> For a delete intensive workl
Hi,
For a delete intensive workload (which translates to write intensive), is
there any reason to use leveled compaction? The recommendation seems to be
that leveled compaction is suited for read intensive workloads.
Depending on your use case, you might be better off with date tiered or
size tiered strategies.
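Moving a table over is just a compaction change; a rough sketch (the table
name and the option values are placeholders, tune them for your data):

ALTER TABLE mykeyspace.mytable WITH compaction = {
    'class' : 'DateTieredCompactionStrategy',
    'base_time_seconds' : '3600',
    'max_sstable_age_days' : '10'
};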