Marcus, thanks a lot! That explains it: those huge tables are indeed at L0.
It seems that they start to appear as a result of some "massive"
operations (join, repair, rebuild). What will happen to them going
forward? Will they keep propagating through the levels like this? Is
there anything that can be done to avoid/fix this? My fear is that
those big tables (like the ones in my "old" cluster) will be hard to
compact in the future...

Sincerely, Andrei.

On Tue, Nov 18, 2014 at 4:27 PM, Marcus Eriksson <krum...@gmail.com> wrote:
> I suspect they are getting size-tiered in L0 - if you have too many
> sstables in L0, we will do size-tiered compaction on the sstables in
> L0 to improve performance.
>
> Use tools/bin/sstablemetadata to get the level for those sstables; if
> they are in L0, that is probably the reason.
>
> /Marcus
>
> On Tue, Nov 18, 2014 at 2:06 PM, Andrei Ivanov <aiva...@iponweb.net> wrote:
>>
>> Dear all,
>>
>> I have the following problem:
>> - C* 2.0.11
>> - LCS with the default 160MB sstable size
>> - Compacted partition maximum bytes: 785939 (for cf/table xxx.xxx)
>> - Compacted partition mean bytes: 6750 (for cf/table xxx.xxx)
>>
>> I would expect the sstables to be at most roughly 160MB each.
>> Despite this I see files like:
>> 192M Nov 18 13:00 xxx-xxx-jb-15580-Data.db
>> or
>> 631M Nov 18 13:03 xxx-xxx-jb-15583-Data.db
>>
>> Am I missing something? What could be the reason? (Actually this is
>> a "fresh" cluster - on an "old" one I'm seeing 500GB sstables.) I'm
>> getting really desperate trying to understand what's going on.
>>
>> Thanks in advance, Andrei.
>
>
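P.S. To act on Marcus's suggestion in bulk, here is a minimal Python
sketch that runs sstablemetadata over every Data.db file in a
directory and reports each sstable's level. The tools/bin/sstablemetadata
path and the "SSTable Level:" output line are assumptions based on the
2.0-era distribution - adjust both if your build differs.

    import glob
    import re
    import subprocess
    import sys

    # Assumed path of the tool inside the Cassandra distribution; point
    # this at your own install if it lives elsewhere.
    SSTABLEMETADATA = "tools/bin/sstablemetadata"

    def sstable_level(data_file):
        # sstablemetadata prints, among other fields, a line like
        # "SSTable Level: 0" (assumed output format; check your version).
        out = subprocess.check_output([SSTABLEMETADATA, data_file])
        match = re.search(r"SSTable Level:\s*(\d+)", out.decode())
        return int(match.group(1)) if match else None

    if __name__ == "__main__":
        # Usage: python sstable_levels.py /path/to/data/keyspace/table
        for path in sorted(glob.glob(sys.argv[1] + "/*-Data.db")):
            print("%s -> level %s" % (path, sstable_level(path)))

Sstables that report level 0 but are far bigger than 160MB would be the
ones produced by the STCS-in-L0 fallback Marcus describes; once LCS
catches up and promotes them to L1, they get split back into
~sstable_size_in_mb-sized files.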