Dear William,

I'm also sorry for my poor English!

Compression ratio 0.566… I understand it as: if you have 1MB of raw data on your laptop, it takes about 0.566MB on disk in Cassandra.
If you put blobs (already compressed data) in the table, that is fine.
If it is plain English text, that compression seems poor, because text usually compresses much better.
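As a rough sanity check (just my own arithmetic, using the cfstats numbers quoted below), the live data of that table would correspond to roughly:

    104005524836 bytes / 0.566085855123187 ≈ 183.7 GB (about 171 GiB) uncompressed

which is close to the 170.426GiB reported as "Bytes unrepaired", so the ratio and the on-disk size look consistent with each other.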

I believe I know the answers to the points below from Bowen's list:

* snapshots could have been created automatically,
  - such as by dropping or truncating tables when auto_snapshot is set to true: it is SET TO TRUE
  - or by compaction when snapshot_before_compaction is set to true: it is SET TO FALSE
* backups, which could have been created automatically, e.g. when incremental_backups is set to true: it is SET TO FALSE
* mixing repaired and unrepaired SSTables, which is usually caused by incremental repairs, even if they have only been run once: no incremental repairs have been done
* partially upgraded cluster, e.g. mixed Cassandra versions in the same cluster: only 1 node in the cluster
* token ring change (e.g. adding or removing nodes) without "nodetool cleanup": only 1 node in the cluster
* changes made to the compression table properties: no changes made

I don't have the answers to these questions below from Bowen:
* do you have any snapshot/backup data in the data folder?
  - you can check with: nodetool listsnapshots (see also the du commands just below)
* is there an actual increase in the size of the business data?
  - maybe a developer used the prod environment to inject data and didn't notice it?
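To check the snapshot and backup directories directly, something like this should work (just a sketch, assuming the default data directory /var/lib/cassandra/data, which matches the path you posted for table1):

    nodetool listsnapshots
    du -sh /var/lib/cassandra/data/*/*/snapshots/* 2>/dev/null
    du -sh /var/lib/cassandra/data/*/*/backups 2>/dev/null

If these directories hold tens of GB, that alone could explain the extra disk usage.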

I only started working on Cassandra administration a few months ago, so I may be wrong!

My team recently migrated Cassandra from version 3 to 4.1.4 and did not observe this behavior.
All tables use the Leveled Compaction Strategy, and we do not perform manual 
compactions.

What I can see from your cfstats output is that the droppable tombstone ratio is below 5%, which is expected and should not significantly impact the data size.
Do we agree on this?
Have you recently deleted a large amount of data?
Could you run the following command on the 80GB+ SSTable and some others?
    $CASSANDRA_HOME/tools/bin/sstablemetadata
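For example, on the 84GB file you listed further down in the thread (a sketch; adjust the path to your actual data directory and SSTable name):

    $CASSANDRA_HOME/tools/bin/sstablemetadata /var/lib/cassandra/data/keyspace1/table1/nb-163033-big-Data.db

In the output, the "Estimated droppable tombstones", "Repaired at" and compression-related fields are the most interesting ones for this problem.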


An SSTable of 80GB+ seems very large.
Within a single SSTable there are no duplicate rows (duplicates can only exist across SSTables), so most of that 80GB+ should be distinct, useful data.

So my question is: how do you determine the data size before compaction?
I understand that you take backups. Could you check the backup archives from before the upgrade?
Do these archives contain the full dataset as it was before the upgrade?

Additionally, try reading back the business data to verify that it is all intact.
You can use the following command to inspect a few records:

$CASSANDRA_HOME/tools/bin/sstabledump -d
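If I remember correctly, the -d flag prints one row per line, which is handy for a quick look. A minimal sketch, again assuming the path from further down in the thread:

    $CASSANDRA_HOME/tools/bin/sstabledump -d /var/lib/cassandra/data/keyspace1/table1/nb-163033-big-Data.db | head -n 20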



Another possibility is to load all the SSTables into another Cassandra instance 
using:

$CASSANDRA_HOME/bin/sstableloader
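A sketch of the invocation (-d points sstableloader at the target node; <target_host> is a placeholder for your test instance, and the directory is the keyspace/table folder containing the SSTables):

    $CASSANDRA_HOME/bin/sstableloader -d <target_host> /var/lib/cassandra/data/keyspace1/table1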

Then, check if you get the same data size.
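For the comparison, once the load and compactions have finished on the test instance, you could run something like this (assuming the same keyspace1.table1 names):

    nodetool tablestats keyspace1.table1

and compare "Space used (live)" with the ~104GB reported on the original node.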

Thank you,
Best regards,
Fabien


From: Tasmaniedemon <tasmaniede...@free.fr>
Sent: Thursday, 20 March 2025 09:06
To: user@cassandra.apache.org
Subject: Re: Increased Disk Usage After Upgrading From Cassandra 3.x.x to 4.1.3





Hi,

Could you give more details about the tables and the data model used on this single-node Cassandra?

Did you start with Cassandra version 3, or had you already migrated from a previous version (2.x) before?

To be honest, I would suggest using the latest available release, and rebuilding and reloading a fresh new cluster with a very low num_tokens (and 3 nodes :-)

May I ask why only a single-node Cassandra? Is scalability not intended?

Sorry for my poor English :-)

Kind regards

Stephane




On 19/03/2025 at 14:15, William Crowell via user wrote:

Bowen, Fabien, Stéphane, and Luciano,

A bit more information here...

We have not run incremental repairs, and we have not made any changes to the 
compression properties on the tables.

When we first started the database, the TTL on the records was set to 0, but now it is set to 10 days.

We do have one table in a keyspace that is occupying 84.1GB of disk space:

ls -l /var/lib/cassandra/data/keyspace1/table1
…
-rw-rw-r--. 1 xxxxxxxx xxxxxxxxx 84145170181 Mar 18 08:28 nb-163033-big-Data.db
…

Regards,

William Crowell


From: William Crowell via user <user@cassandra.apache.org>
Date: Friday, March 14, 2025 at 10:53 AM
To: user@cassandra.apache.org <user@cassandra.apache.org>
Cc: William Crowell <wcrow...@perforce.com>, Bowen Song <bo...@bso.ng>
Subject: Re: Increased Disk Usage After Upgrading From Cassandra 3.x.x to 4.1.3
Bowen,

This is just a single Cassandra node.  Unfortunately, I cannot get on the box 
at the moment, but the following configuration is in cassandra.yaml:

snapshot_before_compaction: false
auto_snapshot: true
incremental_backups: false

The only other configuration parameter that had been changed other than the 
keystore and truststore was num_tokens (default: 16):

num_tokens: 256

I also noticed the compression ratio on the largest table is not good:  
0.566085855123187

Regards,

William Crowell

From: Bowen Song via user <user@cassandra.apache.org>
Date: Friday, March 14, 2025 at 10:13 AM
To: William Crowell via user <user@cassandra.apache.org>
Cc: Bowen Song <bo...@bso.ng>
Subject: Re: Increased Disk Usage After Upgrading From Cassandra 3.x.x to 4.1.3

A few suspects:

* snapshots, which could've been created automatically, such as by dropping or truncating tables when auto_snapshot is set to true, or compaction when snapshot_before_compaction is set to true

* backups, which could've been created automatically, e.g. when 
incremental_backups is set to true

* mixing repaired and unrepaired sstables, which is usually caused by incremental repairs, even if they had only been run once

* partially upgraded cluster, e.g. mixed Cassandra version in the same cluster

* token ring change (e.g. adding or removing nodes) without "nodetool cleanup"

* actual increase in data size

* changes made to the compression table properties



To find the root cause, you will need to check the file/folder sizes to find 
out what is using the extra disk space, and may also need to review the 
cassandra.yaml file (or post it here with sensitive information removed) and 
any actions you've made to the cluster prior to the first appearance of the 
issue.



Also, manually running major compactions is not advised.
On 12/03/2025 20:26, William Crowell via user wrote:
Hi.  A few months ago, I upgraded a single node Cassandra instance from version 3 to 4.1.3.  This instance is not very large, with about 15 to 20 gigabytes of data on version 3, but after the upgrade it has gone up substantially to over 100GB.  I do a compaction once a week and take a snapshot, but with the increase in data the compaction is a much lengthier process.  I also ran sstableupgrade as part of the upgrade.  Any reason for the increased size of the database on the file system?

I am using the default STCS compaction strategy.  My “nodetool cfstats” on a 
heavily used table looks like this:

Keyspace : xxxxxxxx
        Read Count: 48089
        Read Latency: 12.52872569610514 ms
        Write Count: 1616682825
        Write Latency: 0.0067135265490310386 ms
        Pending Flushes: 0
                Table: sometable
                SSTable count: 13
                Old SSTable count: 0
                Space used (live): 104005524836
                Space used (total): 104005524836
                Space used by snapshots (total): 0
                Off heap memory used (total): 116836824
                SSTable Compression Ratio: 0.566085855123187
                Number of partitions (estimate): 14277177
                Memtable cell count: 81033
                Memtable data size: 13899174
                Memtable off heap memory used: 0
                Memtable switch count: 13171
                Local read count: 48089
                Local read latency: NaN ms
                Local write count: 1615681213
                Local write latency: 0.005 ms
                Pending flushes: 0
                Percent repaired: 0.0
                Bytes repaired: 0.000KiB
                Bytes unrepaired: 170.426GiB
                Bytes pending repair: 0.000KiB
                Bloom filter false positives: 125
                Bloom filter false ratio: 0.00494
                Bloom filter space used: 24656936
                Bloom filter off heap memory used: 24656832
                Index summary off heap memory used: 2827608
                Compression metadata off heap memory used: 89352384
                Compacted partition minimum bytes: 73
                Compacted partition maximum bytes: 61214
                Compacted partition mean bytes: 11888
                Average live cells per slice (last five minutes): NaN
                Maximum live cells per slice (last five minutes): 0
                Average tombstones per slice (last five minutes): NaN
                Maximum tombstones per slice (last five minutes): 0
                Dropped Mutations: 0
                Droppable tombstone ratio: 0.04983












