Can you post (anonymized as needed) nodetool status, nodetool netstats, nodetool 
tpstats, and nodetool compactionstats?
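
For example, something along these lines should capture everything (scrub 
hostnames/IPs before posting; -H just prints sizes in human-readable units):

    nodetool status > status.txt
    nodetool netstats > netstats.txt
    nodetool tpstats > tpstats.txt
    nodetool compactionstats -H > compactionstats.txt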

-- 
Jeff Jirsa


> On Oct 15, 2017, at 1:14 PM, Stefano Ortolani <ostef...@gmail.com> wrote:
> 
> Hi Jeff,
> 
> that would be 3.0.15, single disk, vnodes enabled (num_tokens 256).
> 
> Stefano
> 
>> On Sun, Oct 15, 2017 at 9:11 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>> What version?
>> 
>> Single disk or JBOD?
>> 
>> Vnodes?
>> 
>> -- 
>> Jeff Jirsa
>> 
>> 
>>> On Oct 15, 2017, at 12:49 PM, Stefano Ortolani <ostef...@gmail.com> wrote:
>>> 
>>> Hi all,
>>> 
>>> I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so 
>>> far. 
>>> Based on the source code it seems that this option doesn't affect 
>>> compactions while bootstrapping.
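>>> 
>>> (For reference, I am passing the flag through the JVM options of the 
>>> joining node, roughly like this in cassandra-env.sh; the exact file may 
>>> differ depending on the install:)
>>> 
>>>     JVM_OPTS="$JVM_OPTS -Dcassandra.disable_stcs_in_l0=true"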
>>> 
>>> I am getting quite confused, as it seems I cannot bootstrap a node unless 
>>> I have at least 6-7 times the disk space used by the other nodes.
>>> This is weird. The host I am bootstrapping is using an SSD, compaction 
>>> throughput is unthrottled (set to 0), and the compacting threads are set 
>>> to 8.
>>> Nevertheless, primary ranges from other nodes keep being streamed in, but 
>>> the data is never compacted away.
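>>> 
>>> For reference, these are the relevant settings on the joining node, 
>>> roughly (cassandra.yaml):
>>> 
>>>     compaction_throughput_mb_per_sec: 0   # 0 disables throttling
>>>     concurrent_compactors: 8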
>>> 
>>> Does anybody know anything else I could try?
>>> 
>>> Cheers,
>>> Stefano
>>> 
>>>> On Fri, Oct 13, 2017 at 3:58 PM, Stefano Ortolani <ostef...@gmail.com> 
>>>> wrote:
>>>> Another little update: at the same time I see the number of pending 
>>>> compaction tasks stuck (in this case at 1847); restarting the node 
>>>> doesn't help, so I can't really force the node to "digest" all those 
>>>> compactions. In the meantime the disk space used is already twice the 
>>>> average load I have on the other nodes.
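>>>> 
>>>> (For reference, the pending count is what I read off the first line of:
>>>> 
>>>>     nodetool compactionstats -H
>>>> 
>>>> which keeps reporting "pending tasks: 1847" even after the restart.)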
>>>> 
>>>> Feeling more and more puzzled here :S
>>>> 
>>>>> On Fri, Oct 13, 2017 at 1:28 PM, Stefano Ortolani <ostef...@gmail.com> 
>>>>> wrote:
>>>>> I have been trying to add another node to the cluster (after upgrading 
>>>>> to 3.0.15), and I just noticed through "nodetool netstats" that all 
>>>>> nodes have been streaming to the joining node approximately 1/3 of their 
>>>>> SSTables, i.e. basically their whole primary range (we use RF=3).
>>>>> 
>>>>> Is this expected/normal? 
>>>>> I was under the impression only the necessary SSTables were going to be 
>>>>> streamed...
>>>>> 
>>>>> Thanks for the help,
>>>>> Stefano
>>>>> 
>>>>> 
>>>>> On Wed, Aug 23, 2017 at 1:37 PM, kurt greaves <k...@instaclustr.com> 
>>>>> wrote:
>>>>>>> But if it also streams, it means I'd still be under pressure, if I am 
>>>>>>> not mistaken. My assumption is that the compactions are a by-product 
>>>>>>> of streaming too many SSTables at the same time, and not of my current 
>>>>>>> write load.
>>>>>> 
>>>>>> Ah yeah, I wasn't thinking about the capacity problem, more of the 
>>>>>> performance impact from the node being backed up with compactions. If 
>>>>>> you haven't already, you should try disabling STCS in L0 on the joining 
>>>>>> node. You will likely still need to do a lot of compactions, but 
>>>>>> generally they should be smaller. The option is 
>>>>>> -Dcassandra.disable_stcs_in_l0=true
>>>>>>>  I just noticed you were mentioning L1 tables too. Why would that 
>>>>>>> affect the disk footprint?
>>>>>> 
>>>>>> If you've been doing a lot of STCS in L0, you generally end up with 
>>>>>> some large SSTables, and these will eventually have to be compacted 
>>>>>> with L1. You could also be suffering from streamed SSTables triggering 
>>>>>> large cross-level compactions in the higher levels as well.
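>>>>>> 
>>>>>> A quick way to confirm is the "SSTables in each level" line that 
>>>>>> nodetool tablestats (cfstats on older versions) prints for the table, 
>>>>>> roughly:
>>>>>> 
>>>>>>     nodetool tablestats <keyspace>.<table> | grep "SSTables in each level"
>>>>>> 
>>>>>> A large first number (L0) there would fit this picture.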
>>>>>>
>>>>> 
>>>> 
>>> 
> 
