Thanks for the report back.
If LCS falls back to SizeTiered, it means your workload has intensive
write bursts. Maybe addressing those bursts would be better
than hard-tweaking the LCS code.
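As a side note, per-table compaction options can usually be changed without
patching the source at all. Here is a minimal sketch using the DataStax Java
driver; the contact point, keyspace/table names and the sstable_size_in_mb
value are only assumptions, adjust them to your setup:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TuneCompaction {
    public static void main(String[] args) {
        // Hypothetical contact point and table; replace with your own.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Switch the table to LCS and pick a larger target sstable size,
        // instead of modifying the compaction code itself.
        session.execute("ALTER TABLE mykeyspace.images WITH compaction = "
                + "{'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160}");
        cluster.close();
    }
}

The same ALTER TABLE statement can of course be run directly from cqlsh.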
On May 12, 2014 at 17:19, "Yatong Zhang" wrote:
Well, I finally resolved this issue by modifying Cassandra to ignore
sstables larger than a given size threshold.
Leveled compaction will fall back to size-tiered compaction in some
situations, and that's why I always got some huge old sstables being
compacted. More details can be found in the 'Leveled Compaction' documentation.
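For anyone curious, the kind of size filter described above can be sketched
roughly like this. This is plain Java for illustration only, not the actual
Cassandra code path, and the threshold value is just an assumption:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SSTableSizeFilter {
    // Hypothetical threshold: ignore any sstable bigger than this (100 GB here).
    private static final long MAX_SIZE_BYTES = 100L * 1024 * 1024 * 1024;

    // Drop candidate Data.db files that exceed the threshold before they are
    // handed to compaction. This mirrors the idea described above, not
    // Cassandra's real internal types.
    public static List<File> dropOversized(List<File> candidateDataFiles) {
        List<File> kept = new ArrayList<File>();
        for (File f : candidateDataFiles) {
            if (f.length() <= MAX_SIZE_BYTES) {
                kept.add(f);
            }
        }
        return kept;
    }
}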
I am using the latest 2.0.7. The output of 'nodetool tpstats' is:
[root@storage5 bin]# ./nodetool tpstats
> Pool Name                    Active   Pending   Completed   Blocked   All time blocked
> ReadStage                         0         0      628220          0                  0
> RequestResponseStage
The symptoms look like pending compactions are stacking up, or compactions
are failing, so temporary files (-tmp-Data.db) are not properly cleaned up.
What is your Cassandra version? Can you run "nodetool tpstats" and look
into the Cassandra logs to see whether there are issues with compactions?
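A quick way to confirm that leftover temporary files are piling up is to walk
the data directory and total everything matching the -tmp- naming. A rough
sketch; /data6 is taken from the earlier mails and everything else is an
assumption:

import java.io.File;

public class TmpFileCheck {
    public static void main(String[] args) {
        // Data directory from the earlier mails; adjust as needed.
        long total = scan(new File("/data6"));
        System.out.println("Total size of -tmp- files: " + total + " bytes");
    }

    // Recursively list files whose name contains "-tmp-" and sum their sizes.
    private static long scan(File dir) {
        long total = 0;
        File[] children = dir.listFiles();
        if (children == null) {
            return 0;
        }
        for (File f : children) {
            if (f.isDirectory()) {
                total += scan(f);
            } else if (f.getName().contains("-tmp-")) {
                System.out.println(f.getAbsolutePath() + " (" + f.length() + " bytes)");
                total += f.length();
            }
        }
        return total;
    }
}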
Yes, after a while the disk fills up again. So I changed the compaction
strategy from 'size tiered' to 'leveled' to reduce the disk usage during
compaction, but the problem still occurs.
This table has lots of writes, relatively few reads, and no updates.
Here is the schema of the table:
CRE
And after a while the /data6 drive fills up again, right?
One question: can you please give the CQL3 definition of your
"mydb-images-tmp" table?
What is the access pattern for this table? Lots of writes? Lots of updates?
On Sun, May 4, 2014 at 10:00 AM, Yatong Zhang wrote:
After restarting or running 'cleanup', the big tmp file is gone and
everything looks fine:
-rw-r--r-- 1 root root  19K Apr 30 13:58 mydb_oe-images-tmp-jb-96242-CompressionInfo.db
-rw-r--r-- 1 root root 145M Apr 30 13:58 mydb_oe-images-tmp-jb-96242-Data.db
-rw-r--r-- 1 root root  64K Apr 30 13:58 m
Hello Yatong
"If I restart the node or using 'cleanup', it will resume to normal." -->
what does df -hl shows for /data6 when you restart or cleanup the node ?
By the way, a single SSTable of 3.6Tb is kind of huge. Do you perform
manual repair frequently ?
On Sun, May 4, 2014 at 1:51 AM, Yatong Zhang wrote:
My Cassandra cluster has plenty of free space; for now only about 30% of
the space is used.
On Sun, May 4, 2014 at 6:36 AM, Yatong Zhang wrote:
> Hi there,
>
> It was strange that the 'xxx-tmp-xxx.db' files kept growing until
> Cassandra threw exceptions with 'No space left on device'. I am using