Hi all. I set the compaction threshold to minimum 2, maximum 2 and tried
to run compact, but it's not doing anything. There are over 69 sstables
now, read performance is horrible, and it's taking an insane amount of
space. Maybe I don't quite get how the new per-bucket behaviour works,
but I don't think this is normal.
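As Jonathan explains in the quoted reply below, the thresholds apply per
bucket of similarly sized sstables, not per column family. Here is a
minimal sketch in Java of why lots of sstables can still fail to trigger
compaction: files are grouped into size buckets first, and only a bucket
that reaches the minimum threshold is eligible. This is not Cassandra's
actual code; the class name, the 0.5x-1.5x grouping rule and the sizes
are illustrative assumptions.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only -- not Cassandra's source. It mimics the idea
// that compaction thresholds apply per bucket of similarly sized sstables
// rather than per column family.
public class PerBucketThresholdSketch {

    // Group sstable sizes into buckets whose members stay within
    // 0.5x-1.5x of the bucket's running average (assumed grouping rule).
    static List<List<Long>> buckets(List<Long> sstableSizes) {
        List<Long> sorted = new ArrayList<>(sstableSizes);
        Collections.sort(sorted);
        List<List<Long>> result = new ArrayList<>();
        for (long size : sorted) {
            boolean placed = false;
            for (List<Long> bucket : result) {
                double avg = bucket.stream().mapToLong(Long::longValue).average().orElse(0);
                if (size >= 0.5 * avg && size <= 1.5 * avg) {
                    bucket.add(size);
                    placed = true;
                    break;
                }
            }
            if (!placed) {
                result.add(new ArrayList<>(Arrays.asList(size)));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Made-up sizes in MB: six sstables, but most end up alone in their bucket.
        List<Long> sizes = Arrays.asList(5L, 55L, 60L, 65L, 700L, 9000L);
        int minThreshold = 2;  // the per-bucket minimum, as set in the message above
        for (List<Long> bucket : buckets(sizes)) {
            String verdict = bucket.size() >= minThreshold
                    ? "eligible for compaction"
                    : "ignored (below the per-bucket minimum)";
            System.out.println(bucket + " -> " + verdict);
        }
    }
}

Running this prints one singleton bucket per outlier and only the
[55, 60, 65] bucket as eligible, which is roughly the situation when many
sstables of very different sizes pile up but no single bucket ever
reaches the minimum count.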

On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
> As Terje already said in this thread, the threshold is per bucket
> (a group of similarly sized sstables), not per CF.
> 
> 2011/6/13 Héctor Izquierdo Seliva <izquie...@strands.com>:
> > I was already way over the minimum. There were 12 sstables. Also, is
> > there any reason why scrub got stuck? I did not see anything in the
> > logs. Via JMX I saw that the scrubbed bytes were equal to the size of
> > one of the sstables, and it stuck there for a couple of hours.
> >
> > On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
> >> That most likely happened just because after scrub you had new files
> >> and went over the 4-file minimum limit.
> >>
> >> The bug report is
> >> https://issues.apache.org/jira/browse/CASSANDRA-2697

