Although sstable files are immutable, certain operations can delete them,
such as compactions (merges) and upgrades (upgradesstables - you likely
ran this during your upgrade).

Even though snapshots are hard links, once the original sstable file gets
deleted, the snapshot effectively behaves like a copy of the old file: it
keeps pointing to the old inode, so the data continues to occupy disk space.
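To illustrate the point, here is a minimal sketch of the hard-link behavior (the file names are illustrative stand-ins, not real Cassandra paths):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "mc-1-big-Data.db")   # stand-in for an sstable data file
snap = os.path.join(d, "snapshot-Data.db")   # stand-in for its snapshot

with open(orig, "wb") as f:
    f.write(b"\0" * 10 * 1024 * 1024)        # 10 MB of data

os.link(orig, snap)                          # a snapshot is a hard link: same inode
assert os.stat(orig).st_ino == os.stat(snap).st_ino

os.remove(orig)                              # compaction/upgradesstables deletes the original...
print(os.path.getsize(snap))                 # ...but all 10485760 bytes remain on disk
```

In practice, "nodetool listsnapshots" shows which snapshots exist and "nodetool clearsnapshot --all" removes them, which often frees the space still held by pre-upgrade sstables.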

Luciano Greiner

On Thu, Mar 13, 2025 at 11:23 PM William Crowell <wcrow...@perforce.com>
wrote:

> Luciano,
>
>
>
> That is very possible.  Any reason for the increased disk space from
> version 3 to 4?  Did anything in particular change that would affect disk
> usage?
>
>
>
> Thank you for your reply,
>
>
>
> William Crowell
>
>
>
> *From: *Luciano Greiner <luciano.grei...@gmail.com>
> *Date: *Thursday, March 13, 2025 at 10:21 PM
> *To: *user@cassandra.apache.org <user@cassandra.apache.org>
> *Cc: *William Crowell <wcrow...@perforce.com>
> *Subject: *Re: Increased Disk Usage After Upgrading From Cassandra 3.x.x
> to 4.1.3
>
> Did you perhaps forget to clean up some snapshots?
>
>
>
> Luciano Greiner
>
>
>
>
>
>
>
> On Thu, Mar 13, 2025 at 11:18 PM William Crowell via user <
> user@cassandra.apache.org> wrote:
>
> Hi,
>
>
>
> Is this mailing list still active?
>
>
>
> Thanks.
>
>
>
> *From: *William Crowell via user <user@cassandra.apache.org>
> *Date: *Wednesday, March 12, 2025 at 4:42 PM
> *To: *user@cassandra.apache.org <user@cassandra.apache.org>
> *Cc: *William Crowell <wcrow...@perforce.com>
> *Subject: *Re: Increased Disk Usage After Upgrading From Cassandra 3.x.x
> to 4.1.3
>
> I also forgot to include we do compaction once a week.
>
>
>
> Hi.  A few months ago, I upgraded a single-node Cassandra instance from
> version 3 to 4.1.3.  This instance is not very large, with about 15 to 20
> gigabytes of data on version 3, but after the upgrade it grew
> substantially to over 100 GB.  I do a compaction once a week and take a
> snapshot, but with the increase in data the compaction is a much
> lengthier process.  I also ran upgradesstables as part of the upgrade.  Any
> reason for the increased size of the database on the file system?
>
>
>
> I am using the default STCS compaction strategy.  My “nodetool cfstats” on
> a heavily used table looks like this:
>
>
>
> Keyspace : xxxxxxxx
>         Read Count: 48089
>         Read Latency: 12.52872569610514 ms
>         Write Count: 1616682825
>         Write Latency: 0.0067135265490310386 ms
>         Pending Flushes: 0
>                 Table: sometable
>                 SSTable count: 13
>                 Old SSTable count: 0
>                 Space used (live): 104005524836
>                 Space used (total): 104005524836
>                 Space used by snapshots (total): 0
>                 Off heap memory used (total): 116836824
>                 SSTable Compression Ratio: 0.566085855123187
>                 Number of partitions (estimate): 14277177
>                 Memtable cell count: 81033
>                 Memtable data size: 13899174
>                 Memtable off heap memory used: 0
>                 Memtable switch count: 13171
>                 Local read count: 48089
>                 Local read latency: NaN ms
>                 Local write count: 1615681213
>                 Local write latency: 0.005 ms
>                 Pending flushes: 0
>                 Percent repaired: 0.0
>                 Bytes repaired: 0.000KiB
>                 Bytes unrepaired: 170.426GiB
>                 Bytes pending repair: 0.000KiB
>                 Bloom filter false positives: 125
>                 Bloom filter false ratio: 0.00494
>                 Bloom filter space used: 24656936
>                 Bloom filter off heap memory used: 24656832
>                 Index summary off heap memory used: 2827608
>                 Compression metadata off heap memory used: 89352384
>                 Compacted partition minimum bytes: 73
>                 Compacted partition maximum bytes: 61214
>                 Compacted partition mean bytes: 11888
>                 Average live cells per slice (last five minutes): NaN
>                 Maximum live cells per slice (last five minutes): 0
>                 Average tombstones per slice (last five minutes): NaN
>                 Maximum tombstones per slice (last five minutes): 0
>                 Dropped Mutations: 0
>                 Droppable tombstone ratio: 0.04983
>
>
>
>
>
> *This e-mail may contain information that is privileged or confidential.
> If you are not the intended recipient, please delete the e-mail and any
> attachments and notify us immediately.*
>
>
>
> *CAUTION:* This email originated from outside of the organization. Do not
> click on links or open attachments unless you recognize the sender and know
> the content is safe.
>
>
>
>
>
>
>
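As an aside, the cfstats figures quoted above are internally consistent: dividing the live on-disk bytes by the compression ratio (reported as compressed/uncompressed) recovers roughly the uncompressed size shown as "Bytes unrepaired". A quick back-of-the-envelope check, using the numbers copied from the output:

```python
live_bytes = 104_005_524_836       # Space used (live): compressed bytes on disk
ratio = 0.566085855123187          # SSTable Compression Ratio = compressed / uncompressed
reported_unrepaired_gib = 170.426  # Bytes unrepaired: uncompressed data size

uncompressed_gib = live_bytes / ratio / 2**30
print(f"{uncompressed_gib:.1f} GiB")   # ~171.1 GiB, close to the 170.426 GiB reported
```

So the ~100 GB on disk is consistent with the table's own statistics; the question is where the extra data came from, not whether cfstats is misreporting it.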
