Yes, many sstables can have a huge negative impact on read performance
(each read may have to touch many files), and will also create memory
pressure on that node.

There are a lot of things that can produce this effect, and it strongly
suggests you're falling behind on compaction in general (check
nodetool compactionstats; you should have fewer than 5 pending tasks,
preferably 0-1).  To see whether and how much it is impacting your read
performance, check nodetool cfstats <keyspace.table> and nodetool
cfhistograms <keyspace> <table>, paying attention to the SSTables-per-read
figures.
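Concretely, the checks look like this (the keyspace/table names here are
placeholders; substitute your own):

  # compaction backlog: "pending tasks" should stay near zero
  nodetool compactionstats

  # per-table statistics, including the live SSTable count
  nodetool cfstats mykeyspace.mytable

  # latency histograms; the "SSTables" column shows how many sstables
  # each read had to touch
  nodetool cfhistograms mykeyspace mytable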


On Thu, Jan 15, 2015 at 2:11 AM, Roland Etzenhammer <
r.etzenham...@t-online.de> wrote:

> Hi,
>
> I'm testing around with Cassandra a fair bit, using 2.1.2 which I know has
> some major issues, but it is a test environment. After some bulk loading,
> testing with incremental repairs and running out of heap once, I found that
> I now have a quite large number of sstables which are really small:
>
> <1k              0      0.0%
> <10k          2780     76.8%
> <100k         3392     93.7%
> <1000k        3461     95.6%
> <10000k       3471     95.9%
> <100000k      3517     97.1%
> <1000000k     3596     99.3%
> all           3621    100.0%
>
> 76.8% of all sstables in this particular column family are smaller than
> 10kB, and 93.7% are smaller than 100kB.
>
> Just for my understanding - does that impact performance? And is there any
> way to reduce the number of sstables? A full run of nodetool compact takes
> a really long time (more than 1 day).
>
> Thanks for any input,
> Roland
>
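For reference, a size breakdown like the one above can be reproduced
straight from the data directory with a quick shell sketch (the path and
file naming below assume the 2.1 defaults; adjust for your layout):

  cd /var/lib/cassandra/data/mykeyspace/mytable
  find . -name '*-Data.db' -size -10k | wc -l    # sstables under ~10kB
  find . -name '*-Data.db' -size -100k | wc -l   # sstables under ~100kB
  find . -name '*-Data.db' | wc -l               # total sstables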
