From
https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
> Cassandra allows you to set a default_time_to_live property for an entire
table. Columns and rows marked with regular TTLs are processed as described
above; but when a record exceeds the table-level TTL, **Cassandra deletes the
record immediately, without tombstoning or compaction.**
I'm using LCS and a relatively large TTL of 2 years for all inserted rows,
and I'm concerned about when C* would actually drop the corresponding
tombstones (neither explicit deletes nor updates are being performed).
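For a cell written with that 2-year TTL, the timeline can be sketched as simple arithmetic. This is a simplification, and the gc_grace_seconds value below is the Cassandra default, assumed for this table:

```python
# Sketch of the TTL timeline for a single cell, under default settings.
# Assumes gc_grace_seconds = 864000 (the Cassandra default); adjust for your table.

TTL_SECONDS = 2 * 365 * 24 * 3600   # the 2-year TTL from the question
GC_GRACE_SECONDS = 864_000          # default gc_grace_seconds: 10 days

def tombstone_droppable_at(write_time_s: float) -> float:
    """Earliest time at which compaction is allowed to purge the expired cell."""
    expires_at = write_time_s + TTL_SECONDS   # cell becomes an expired tombstone here
    return expires_at + GC_GRACE_SECONDS      # purgeable only after gc_grace_seconds

print(tombstone_droppable_at(0.0))  # 63936000.0
```

Even past this point, with LCS the data is only removed when a compaction actually touches the SSTable holding it, which is the crux of the question.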
From [Missing Manual for Leveled Compaction Strategy](
https://www.youtube.co
>> scenario to using TTL per insert?
>>
>
> Yes, exactly this,
>
> C*heers.
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickl
> …is good if you can rotate the partitions over time,
> e.g. not reusing old partitions.
>
> C*heers,
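The partition-rotation advice above can be sketched with a hypothetical time bucket folded into the partition key. The `day_bucket` helper and the day granularity are illustrative assumptions, not from the thread:

```python
from datetime import datetime, timezone

def day_bucket(ts: datetime) -> str:
    """Day-granularity bucket to include in the partition key, so each day's
    writes open a fresh partition and old partitions are never written again."""
    return ts.strftime("%Y-%m-%d")

# Two different days never share a partition, so a whole partition's data
# (and eventually its tombstones) ages out together:
print(day_bucket(datetime(2019, 4, 8, tzinfo=timezone.utc)))  # 2019-04-08
print(day_bucket(datetime(2019, 4, 9, tzinfo=timezone.utc)))  # 2019-04-09
```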
After a bulk load of writes to existing partition keys (with a higher
timestamp), I wanted to free disk space, suspecting that the rows would sit
in the highest levels and that it would take a while until they were
compacted. I started a major compaction, and disk usage went from ~30% to
~40% (as expected).
Hi, I'm trying to test whether adding driver compression will bring me any
benefit.
I understand that the trade-off is less bandwidth but increased CPU usage on
both the Cassandra nodes (compression) and the client nodes (decompression),
but I want to know what the key metrics are and how to monitor them.
rs).
>
> Jon
>
> On Mon, Apr 8, 2019 at 7:26 AM Gabriel Giussi
> wrote:
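A rough way to feel out that bandwidth-vs-CPU trade-off before touching the cluster is to compress a representative payload and measure both sides. The sketch below uses zlib purely as a stand-in; Cassandra drivers typically negotiate LZ4 or Snappy, so real ratios and timings will differ:

```python
import time
import zlib

# Illustrative, repetitive payload standing in for a real query result set.
payload = b'{"sensor":"s1","value":42.0,"ts":1554732360}' * 1000

start = time.perf_counter()
compressed = zlib.compress(payload)
cpu_seconds = time.perf_counter() - start

ratio = len(compressed) / len(payload)
print(f"original={len(payload)}B compressed={len(compressed)}B "
      f"ratio={ratio:.3f} cpu={cpu_seconds * 1e3:.2f}ms")
```

On the cluster these same two quantities are what to watch: bytes on the wire versus CPU time, on both the client and the server side.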
I've found a huge partition (~9 GB) in my Cassandra cluster, because I'm
losing 3 nodes recurrently due to OutOfMemoryError:
> ERROR [SharedPool-Worker-12] 2019-08-12 11:07:45,735
> JVMStabilityInspector.java:140 - JVM state determined to be unstable.
> Exiting forcefully due to:
> java.lang.OutOfMemoryError
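A common remedy for a partition that large is to split the hot key across a fixed number of synthetic buckets, so no single partition can grow unbounded. The helper below is a hypothetical sketch; `N_BUCKETS` and the key names are assumptions, not from the thread:

```python
import hashlib

N_BUCKETS = 16  # illustrative; size so each bucket stays far below the ~9 GB observed

def bucketed_key(natural_key: str, row_id: str):
    """Derive a deterministic bucket from the row id: writes for the same
    natural key spread over N_BUCKETS partitions, and reads for that key
    fan out over all buckets."""
    h = int(hashlib.md5(row_id.encode()).hexdigest(), 16)
    return natural_key, h % N_BUCKETS

print(bucketed_key("hot-key", "row-1"))
```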
I know Cassandra uses consistent hashing to choose the node a key should go
to, and if I understand correctly from this image
https://cassandra.apache.org/doc/latest/cassandra/_images/ring.svg
if the replication factor is 3 it just picks the other two nodes following
the ring clockwise.
I
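That clockwise walk (SimpleStrategy-style, ignoring racks and datacenters) can be sketched as follows. The md5-based token is a stand-in for Cassandra's Murmur3 partitioner, and the 4-node toy ring is an assumption:

```python
from bisect import bisect_left
import hashlib

def token(key: str) -> int:
    # Stand-in token function; Cassandra actually uses the Murmur3 partitioner.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def replicas_for_token(t: int, ring: dict, rf: int = 3) -> list:
    """Owner = first node whose token is >= t (wrapping around); the remaining
    rf - 1 replicas are the next distinct nodes clockwise on the ring."""
    tokens = sorted(ring)
    start = bisect_left(tokens, t) % len(tokens)
    out, i = [], start
    while len(out) < min(rf, len(set(ring.values()))):
        node = ring[tokens[i % len(tokens)]]
        if node not in out:
            out.append(node)
        i += 1
    return out

def replicas(key: str, ring: dict, rf: int = 3) -> list:
    return replicas_for_token(token(key), ring, rf)

# Toy ring: one token per node.
ring = {0: "A", 100: "B", 200: "C", 300: "D"}
print(replicas_for_token(150, ring))  # ['C', 'D', 'A']
```

A key with token 150 lands on C (first token at or past 150), then D and A follow clockwise, matching the picture in the linked diagram.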