Assuming your total disk space is a lot bigger than 50GB in size
(accounting for disk space amplification, commit log, logs, OS data,
etc.), I would suspect the disk space is being used by something else.
Have you checked that the disk space is actually being used by the
cassandra data directory?
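For example, something along these lines (assuming the default
/var/lib/cassandra paths; adjust to your data_file_directories setting):

    du -sh /var/lib/cassandra/data          # actual on-disk size of the data directory
    du -sh /var/lib/cassandra/commitlog     # commit log usage
    nodetool status                         # the "Load" column shows live data size per node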
Yes, I checked and cleared all snapshots, and I also had incremental
backups in the backup folder, which I removed as well. It's purely data.
On Friday, September 17, 2021, Bowen Song wrote:
> Assuming your total disk space is a lot bigger than 50GB in size
> (accounting for disk space amplification, c
Okay, so how big exactly is the data on disk? You said removing and
adding a new node gives you 20GB on disk; was that done via the
'-Dcassandra.replace_address=...' parameter? If not, the new node will
almost certainly have a different token range and will not be directly
comparable to the existing node.
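For reference, a replacement node would normally be started with
something roughly like this (the address is only an example; the exact
file depends on your version and packaging):

    # on the NEW node, before its first start
    # (cassandra-env.sh, or jvm-server.options in 4.0+)
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"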
Hello Abdul,
Adding to what Bowen already shared for snapshots.
Assuming that you're not just amplifying disk space by updating/deleting
existing data many times, these are the following things that you should
consider:
* Manual snapshots
* Check (nodetool listsnapshots) and remove
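For example (the snapshot name is a placeholder; newer versions may need
'nodetool clearsnapshot --all' to remove everything):

    nodetool listsnapshots                      # show snapshots and their true disk size
    nodetool clearsnapshot -t <snapshot_name>   # remove one snapshot by name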
Close to 300 GB of data. I ran nodetool decommission/removenode and
added back one node, and it came back to 22GB.
Can't run major compaction as there isn't much space left.
On Friday, September 17, 2021, Bowen Song wrote:
> Okay, so how big exactly is the data on disk? You said removing and adding
> a new node gives
If major compaction is failing due to a disk space constraint, you could
copy the files to another server and run a major compaction there
instead (i.e., start Cassandra on the new server without joining the
existing cluster). If you must replace the node, at least use the
'-Dcassandra.replace_address=...' parameter.
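A very rough sketch of that idea (keyspace, table and host names are
made up, and this is not a tested procedure):

    # on the spare server, running a standalone Cassandra with the same schema
    rsync -a prod-node:/var/lib/cassandra/data/my_ks/my_table-*/ \
        /var/lib/cassandra/data/my_ks/my_table-<local_table_id>/
    nodetool refresh my_ks my_table     # pick up the copied SSTables
    nodetool compact my_ks my_table     # run the major compaction on the spare node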
Thanks for the help.
How does the compaction run? Does it clean up the old files while it's
running, or only at the end? I want to manage the free space so it
doesn't run out while the compaction is running.
From: Jim Shaw
Sent: Wednesday, September 15, 2021 3:49 PM
To: user@cassandra.apache.org
Subject: Re: TWCS on
If you use TWCS with TTL, the old SSTables won't be compacted; the
entire SSTable file will get dropped after it expires. I don't think you
will need to manage the compaction or cleanup at all, as they are
automatic. There's no space limit on the table holding the near-term
data other than the
Thanks.
Every 48 hours, the application deletes data older than 48 hours.
Auto compaction works, but as space is full, the error log only says
there is not enough space to run compaction.
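If it helps, the remaining headroom and compaction backlog can be
checked with something like this (data path assumed):

    df -h /var/lib/cassandra/data    # free space on the data volume
    nodetool compactionstats         # pending compactions and bytes remaining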
On Friday, September 17, 2021, Bowen Song wrote:
> If major compaction is failing due to disk space constraint, you could
> copy the fi
Congratulations! You've just found the cause of it. Does all data get
deleted 48 hours after it is inserted? If so, are you sure LCS is the
right compaction strategy for this table? TWCS sounds like a much better
fit for this purpose.
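Just as an illustration (the keyspace/table names and the 12-hour window
are assumptions, and this only works if the data can be written with a
TTL instead of explicit deletes):

    # 172800 seconds = 48 hours
    cqlsh -e "ALTER TABLE my_ks.my_table
      WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                         'compaction_window_unit': 'HOURS',
                         'compaction_window_size': 12}
      AND default_time_to_live = 172800;"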
On 17/09/2021 19:16, Abdul Patel wrote:
Thanks.
Appl
I upgraded a test cluster from 3.11.11 to 4.0.1 today, and the following
message started appearing in the logs fairly regularly:
INFO [Native-Transport-Requests-1] ShortReadPartitionsProtection.java:153
- Requesting 4951 extra rows from
Full(/10.132.39.33:25469,(8972614703113750908,-91638223484521669
Short read protection is a feature added in 3.0 to work around a possible
situation in 2.1 where we could fail to return all rows in a result.
The basic premise is that when you read, we ask for the same number of rows
from all of the replicas involved in the query. It’s possible, with the righ
The 48-hour deletion removes data older than 48 hours.
LCS was used as it's more of a write-once, read-many application.
On Friday, September 17, 2021, Bowen Song wrote:
> Congratulation! You've just found out the cause of it. Does all data get
> deletes 48 hours after they are inserted? If
TWCS is best for TTL, not for explicit deletes, correct?
On Friday, September 17, 2021, Abdul Patel wrote:
> 48hrs deletion is deleting older data more than 48hrs .
> LCS was used as its more of an write once and read many application.
>
> On Friday, September 17, 2021, Bowen Song wrote:
>
>>