Another small update: at the same time I see the number of pending compaction
tasks stuck (in this case at 1847); restarting the node doesn't help, so I
can't really force the node to "digest" all those compactions. Meanwhile, the
disk space used on this node is already twice the average load I see on the
other nodes.
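
(For context, those numbers come from the usual checks, something along these
lines; output trimmed, data path depends on the install:)

    $ nodetool compactionstats
    pending tasks: 1847
    ...
    $ nodetool status | grep UN     # compare the Load column across nodes
    $ df -h /var/lib/cassandra      # actual disk usage on the joining node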

Feeling more and more puzzled here :S

On Fri, Oct 13, 2017 at 1:28 PM, Stefano Ortolani <ostef...@gmail.com>
wrote:

> I have been trying to add another node to the cluster (after upgrading to
> 3.0.15), and I just noticed through "nodetool netstats" that all nodes have
> been streaming to the joining node approximately 1/3 of their SSTables,
> basically their whole primary range (we are using RF=3).
>
> Is this expected/normal?
> I was under the impression only the necessary SSTables were going to be
> streamed...
>
> Thanks for the help,
> Stefano
>
>
> On Wed, Aug 23, 2017 at 1:37 PM, kurt greaves <k...@instaclustr.com>
> wrote:
>
>>> But if it also streams, it means I'd still be under pressure if I am not
>>> mistaken. I am under the assumption that the compactions are the by-product
>>> of streaming too many SSTables at the same time, and not because of my
>>> current write load.
>>>
>> Ah yeah, I wasn't thinking about the capacity problem, more of the
>> performance impact from the node being backed up with compactions. If you
>> haven't already, you should try disabling STCS in L0 on the joining node.
>> You will likely still need to do a lot of compactions, but generally they
>> should be smaller. The option is -Dcassandra.disable_stcs_in_l0=true
>>
>>>  I just noticed you were mentioning L1 tables too. Why would that affect
>>> the disk footprint?
>>
>> If you've been doing a lot of STCS in L0, you generally end up with some
>> large SSTables. These will eventually have to be compacted with L1. You
>> could also be suffering from streamed SSTables causing large cross-level
>> compactions in the higher levels as well.
>>
>
>
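
P.S. for anyone else following this thread: the
-Dcassandra.disable_stcs_in_l0=true flag Kurt mentions above is just a JVM
system property, so one way to pass it (a sketch, the exact file path depends
on your install) is to add it to cassandra-env.sh and restart the joining node:

    # cassandra-env.sh (e.g. /etc/cassandra/cassandra-env.sh on package installs)
    JVM_OPTS="$JVM_OPTS -Dcassandra.disable_stcs_in_l0=true"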

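To see how skewed the levels actually are on the joining node (i.e. how much
data is parked in L0 waiting to be promoted to L1+), "nodetool cfstats" on the
affected table prints the per-level SSTable counts; the keyspace/table below
are placeholders and the numbers are only illustrative:

    $ nodetool cfstats my_keyspace.my_table
    ...
    SSTables in each level: [1842/4, 10, 98, 0, 0, 0, 0, 0, 0]
    ...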