> We are entirely IO-bound. What killed us last week were too many reads
> combined with flushes and compactions. Reducing compaction priority helped,
> but it was not enough. The main reason we could not add nodes, though,
> had to do with the quorum reads we are doing:

I'm going to respond to this separately, and somewhat unrelated to the
purpose of your thread: how far away are you from being I/O bound (say
in terms of % utilization - last column of iostat -x 1 - assuming you
don't have a massive RAID underneath the block device) when
compaction/AES is *not* running? I.e., in relative terms of "time spent
by disks servicing requests", how much is added by compaction/AES?
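
If it helps to gather those numbers, here is a rough sketch (assuming
Linux sysstat iostat, where %util is the last column of each device
line; the device name and parsing are just examples, adjust to taste)
that samples %util for one device so you can compare runs with and
without compaction/AES:

#!/usr/bin/env python3
# Sample the %util column of "iostat -dx" for one device and report the
# average, so utilization with vs. without compaction/AES can be compared.
import subprocess
import sys

def sample_util(device, interval=1, count=30):
    """Yield %util readings for `device` from iostat -dx <interval> <count>."""
    proc = subprocess.Popen(
        ["iostat", "-dx", str(interval), str(count)],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        fields = line.split()
        if fields and fields[0] == device:
            yield float(fields[-1])  # %util is the last column on Linux

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"  # example device name
    readings = list(sample_util(dev))
    # Drop the first report: iostat's initial sample is the since-boot average.
    steady = readings[1:] or readings
    print(f"avg %util on {dev}: {sum(steady) / len(steady):.1f}")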

Are your values generally largish (say a few kB or so), very small
(5-50 bytes), or somewhere in between? I've been trying to collect
information whenever people report compaction/repair killing their
performance. My hypothesis is that most severe issues are with data
sets where compaction becomes I/O bound rather than CPU bound (for
those that have seen me say this a gazillion times, I must sound like a
stuck LP record); this would tend to be expected with larger and fewer
values, as opposed to smaller and more numerous values, since the
latter are much more expensive in terms of CPU cycles per byte
compacted. Further, I expect CPU-bound compaction to be a problem much
less frequently by comparison. I'm trying to confirm or falsify that
hypothesis.
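
To make the intuition concrete, a toy back-of-envelope (the numbers are
entirely made up, only the shape of the argument matters): compaction
pays a roughly fixed per-value CPU overhead on top of the per-byte copy
cost, and that overhead amortizes poorly over small values.

# Hypothetical cost model: CPU cost per byte compacted rises as values shrink,
# because the fixed per-value overhead is spread over fewer bytes.
PER_VALUE_OVERHEAD_US = 2.0   # assumed fixed CPU cost per value (microseconds)
PER_BYTE_COST_US = 0.01      # assumed CPU cost per byte copied (microseconds)

for value_size in (20, 200, 4096):  # bytes per value
    cpu_us_per_byte = PER_BYTE_COST_US + PER_VALUE_OVERHEAD_US / value_size
    print(f"{value_size:>5} B values: ~{cpu_us_per_byte:.3f} us CPU per byte")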

-- 
/ Peter Schuller
