Setting them to 2 and 2 means compaction can only ever compact 2 files at a 
time, so it will make things worse.
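
You can check what a column family is currently set to with 
`getcompactionthreshold`. A rough sketch, assuming a keyspace `MyKeyspace` 
and column family `MyColumnFamily` (substitute your own names):

    # print the current min and max compaction thresholds for the CF
    nodetool -h localhost getcompactionthreshold MyKeyspace MyColumnFamily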

Let's try the following:

- restore the compaction settings to the defaults of 4 and 32 (see the 
command sketch after this list)
- run `ls -lah` in the data dir and grab the output
- run `nodetool flush`; this will trigger a minor compaction once the 
memtables have been flushed
- check the logs for messages from `CompactionManager`
- when done, grab the output from `ls -lah` again.
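
Off the top of my head the commands look something like this. It's only a 
sketch; `MyKeyspace` / `MyColumnFamily` and the data and log paths are 
placeholders for your own setup:

    # restore the default compaction thresholds (min 4, max 32)
    nodetool -h localhost setcompactionthreshold MyKeyspace MyColumnFamily 4 32

    # grab a listing of the sstables before the flush
    ls -lah /var/lib/cassandra/data/MyKeyspace

    # flush the memtables, which should kick off a minor compaction
    nodetool -h localhost flush MyKeyspace MyColumnFamily

    # look for compaction activity in the logs
    grep CompactionManager /var/log/cassandra/system.log

    # and grab the listing again when it's done
    ls -lah /var/lib/cassandra/data/MyKeyspace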

Hope that helps. 

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:

> Hi All. I set the compaction threshold at minimum 2, maximum 2 and tried
> to run a compaction, but it's not doing anything. There are over 69 sstables
> now, read performance is horrible, and it's taking an insane amount of
> space. Maybe I don't quite get how the new per-bucket stuff works, but I
> think this is not normal behaviour.
> 
> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>> As Terje already said in this thread, the threshold is per bucket
>> (group of similarly sized sstables) not per CF.
>> 
>> 2011/6/13 Héctor Izquierdo Seliva <izquie...@strands.com>:
>>> I was already way over the minimum. There were 12 sstables. Also, is
>>> there any reason why scrub got stuck? I did not see anything in the
>>> logs. Via JMX I saw that the scrubbed byte count was equal to the size
>>> of one of the sstables, and it stuck there for a couple of hours.
>>> 
>>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>> That most likely happened just because, after scrub, you had new files
>>>> and went over the minimum limit of 4 files.
>>>> 
>>>> The bug report is here:
>>>> 
>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>> 
