>> ction to chew through that. If you added a bunch of nodes in sequence,
>> you'd have 5k on the first node, then potentially 10k on the next, and it
>> could potentially keep increasing as you start streaming from nodes that
>> have way
>>
> Also related to this point, now I'm seeing something even more odd: some
> compactions are way bigger than the size of the column family itself,
> such as:
>
>> The size reported by compactionstats is the uncompressed size – if you're
>> using compression, it's perfectly reasonable for 30G of data to show up as
>> 118G of data during compaction.
>>
>> - Jeff
>>
>> From: Gianluca Borello
>> Reply-To: "user@cassandra.apache.org"
>> Date: Monday, March 21, 2016 at 12:50 PM
The size reported by compactionstats is the uncompressed size – if you're
using compression, it's perfectly reasonable for 30G of data to show up as
118G of data during compaction.
- Jeff
From: Gianluca Borello
Reply-To: "user@cassandra.apache.org"
Date: Monday, March 21, 2016 at 12:50 PM
To: "user@cassandra.apache.org"
Subject: Pending compactions not going down on some
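
To make the size arithmetic above concrete, here is a minimal Python sketch
(an illustration, not anything from the thread) of how the on-disk,
compressed size relates to the uncompressed size that compactionstats
reports. The ratio below is simply derived from the 30G and 118G figures
quoted above, not from any real cfstats output.

# Relate the on-disk (compressed) size to the uncompressed size that
# compactionstats reports. The compression ratio here is derived from the
# 30G / 118G figures in the thread, purely for illustration.

def uncompressed_gb(on_disk_gb, compression_ratio):
    # compression_ratio = compressed size / uncompressed size (< 1 when data compresses)
    return on_disk_gb / compression_ratio

ratio = 30.0 / 118.0                        # ~0.25: data shrinks to about a quarter of its raw size
print(round(uncompressed_gb(30.0, ratio)))  # -> 118, the figure compaction would report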
On Mon, Mar 21, 2016 at 12:50 PM, Gianluca Borello wrote:
>
> - It's also interesting to notice how the compaction in the previous
> example is trying to compact ~37 GB, which is essentially the whole size of
> the column family message_data1 as reported by cfstats:
>
Also related to this point, now I'm seeing something even more odd: some
compactions are way bigger than the size of the column family itself, such as:
On Mon, Mar 21, 2016 at 2:15 PM, Alain RODRIGUEZ wrote:
>
> What hardware do you use? Can you see it running at the limits (CPU /
> disks IO)? Is there any error on system logs, are disks doing fine?
>
>
Nodes are c3.2xlarge instances on AWS. The nodes are relatively idle, and,
as said in the ori
Hi, thanks for the detailed information, it is useful.
SSTables in each level: [43/4, 92/10, 125/100, 0, 0, 0, 0, 0, 0]
Looks like compaction is not doing so hot indeed.
What hardware do you use? Can you see it running at the limits (CPU / disks
IO)? Is there any error on system logs, are disks doing fine?
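
As an aside, the "SSTables in each level" line quoted above can be read as
count/target pairs: with LeveledCompactionStrategy, cfstats appends the
target when a level holds more SSTables than it should. The short Python
sketch below (an illustration, not a Cassandra API) just parses that line to
make the backlog visible.

# Parse the "SSTables in each level" line quoted above. Entries are either
# "count" or "count/target"; any entry with a slash marks a level that is
# over its target and therefore part of the compaction backlog.

def parse_levels(line):
    entries = line.strip("[] \n").split(", ")
    for level, entry in enumerate(entries):
        if "/" in entry:
            count, target = (int(x) for x in entry.split("/"))
            yield level, count, target
        else:
            yield level, int(entry), None

for level, count, target in parse_levels("[43/4, 92/10, 125/100, 0, 0, 0, 0, 0, 0]"):
    if target is not None:
        print("L%d: %d SSTables, over the target of %d" % (level, count, target))
    else:
        print("L%d: %d SSTables" % (level, count))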
Hi,
We added a bunch of new nodes to a cluster (2.1.13) and everything went
fine, except for the number of pending compactions that is staying quite
high on a subset of the new nodes. Over the past 3 days, the pending
compactions have never been less than ~130 on such nodes, with peaks of
~200. On
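
For completeness, a small illustrative sketch of how one might watch whether
a backlog like the one described above ever trends down, by polling nodetool.
It assumes nodetool is on the PATH and that "nodetool compactionstats" prints
a "pending tasks: N" line, as it does on 2.1; this is only a monitoring aid,
not something suggested in the thread.

# Poll nodetool compactionstats once a minute and log the pending-task
# count, so a backlog that never shrinks (as described above) is easy to spot.
import re
import subprocess
import time

def pending_compactions():
    out = subprocess.run(["nodetool", "compactionstats"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"pending tasks:\s*(\d+)", out)
    return int(match.group(1)) if match else 0

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"), pending_compactions())
    time.sleep(60)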