Hi, that's my view as well: since we have only one way to insert data, it
seems reasonable to set the TTL in code, while still allowing it to be
changed via configuration if needed.
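For illustration, the two approaches look like this in CQL (the keyspace, table and column names here are made-up placeholders):

```sql
-- per-write TTL, set by the application (31 days = 2678400 seconds)
INSERT INTO my_keyspace.my_table (id, payload)
VALUES (uuid(), 'example') USING TTL 2678400;

-- table-level default TTL, changeable later without touching the code
ALTER TABLE my_keyspace.my_table WITH default_time_to_live = 2678400;
```

Note the per-write TTL takes precedence over the table default, so both can coexist.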
I used another way to clean my cluster: I take a node down, delete the bad
SSTables manually, then bring the node back up and repair it, and I apply
this node by node. Now the nodes no longer consume more space than expected!
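For reference, a rough sketch of that node-by-node procedure (the file paths, service name, keyspace/table names and SSTable generation are assumptions; adapt them to your installation, and only delete SSTables while the node is fully stopped):

```shell
# flush memtables and stop accepting traffic before shutdown
nodetool drain
sudo systemctl stop cassandra

# delete every component file of the bad SSTable generation;
# keyspace, table and generation here are placeholders
rm /var/lib/cassandra/data/my_keyspace/my_table-*/md-1234-big-*

# bring the node back up and repair it before moving to the next one
sudo systemctl start cassandra
nodetool repair my_keyspace
```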
Thank you all for your help.
On 06/10/2021 at 17:59, Jeff Jirsa wrote:
I think this is a bit extreme. If you know that 100% of all queries that
write to the table include a TTL, not having a TTL on the table is just
fine. You just need to ensure that you always write correctly.
On Wed, Oct 6, 2021 at 8:57 AM Bowen Song <bo...@bso.ng> wrote:
TWCS without a table TTL is unlikely to work correctly, and adding the
table TTL retrospectively alone is also unlikely to fix the existing
issue. You may need to add the table default TTL, update all existing
data to reflect the TTL change, and then trigger a major compaction to
update the SSTable files' metadata (specifically, the maximum timestamp
and the maximum TTL in the SSTable, which can be used to calculate the
safe time for deleting the entire SSTable file). After all of the above
is done, you will need to wait for the table default TTL amount of time
before everything is back to normal. The waiting time is needed because
the major compaction will result in a single SSTable file that only
expires after the TTL time, and that SSTable will remain on disk until
that amount of time has passed. So you will need enough disk space for
about twice the amount of data you expect to have in that table.
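Using the figures quoted elsewhere in this thread (31-day per-write TTL, 10-day gc_grace_seconds, and the newest write time from the sstabledump output below), the earliest time a fully-expired SSTable can safely be dropped can be sketched as a back-of-the-envelope calculation (assuming GNU date, and that whole-file expiry waits for max timestamp + TTL + gc_grace):

```shell
# newest write in the SSTable (from the sstabledump output below)
max_ts=$(date -u -d "2021-07-26T08:15:00Z" +%s)
ttl=2678400        # 31 days, the per-write TTL
gc_grace=864000    # 10 days, gc_grace_seconds

# the whole file can only be dropped once every cell has expired
# and the resulting tombstones have aged past gc_grace
date -u -d "@$((max_ts + ttl + gc_grace))" +%Y-%m-%dT%H:%M:%SZ
# prints 2021-09-05T08:15:00Z
```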
On 06/10/2021 16:34, Michel Barret wrote:
> Hi, it was not set before. I have set it now to ensure all data have a TTL.
>
> Thanks for your help.
>
> On 06/10/2021 at 13:47, Bowen Song wrote:
>> What is the table's default TTL? (Note: it may be different from
>> the TTL of the data in the table.)
>>
>> On 06/10/2021 09:42, Michel Barret wrote:
>>> Hello,
>>>
>>> I am trying to use Cassandra (3.11.5) with 8 nodes (in a single
>>> datacenter). I use one simple table; all data are inserted with a
>>> 31-day TTL (the data are never updated).
>>>
>>> I use the TWCS strategy with:
>>> - 'compaction_window_size': '24'
>>> - 'compaction_window_unit': 'HOURS'
>>> - 'max_threshold': '32'
>>> - 'min_threshold': '4'
>>>
>>> Each node runs 'nodetool repair' once a week, and our
>>> gc_grace_seconds is set to 10 days.
>>>
>>> I track the nodes' storage, and the partition used for Cassandra
>>> data (used only for this) reaches ~40% usage after one month.
>>>
>>> But Cassandra continuously consumes more space. When I read the
>>> SSTables with sstabledump, I find very old tombstones like this:
>>>
>>> "liveness_info" : { "tstamp" : "2021-07-26T08:15:00.092897Z",
>>> "ttl" : 2678400, "expires_at" : "2021-08-26T08:15:00Z",
>>> "expired" : true }
>>>
>>> I don't understand why this tombstone isn't removed. I believe I
>>> have applied everything I found on the internet, without any
>>> improvement.
>>>
>>> Does anybody have a clue how to fix my problem?
>>>
>>> Have a nice day
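For reference, the TWCS settings quoted in the original message above would correspond to CQL along these lines (the table name is a placeholder):

```sql
ALTER TABLE my_keyspace.my_table WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '24',
    'min_threshold': '4',
    'max_threshold': '32'
};
```

With a 24-hour window and a 31-day TTL, this yields roughly 31 SSTables per node in the steady state, each of which becomes droppable as a whole once its newest data has expired.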