Hi all,
I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so
far.
Based on the source code it seems that this option doesn't affect
compactions while bootstrapping.
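For reference, this is roughly how such a JVM flag tends to be consumed; a minimal sketch, not the actual Cassandra source:

public final class L0CompactionGate {
    // Boolean.getBoolean(...) is true only if the JVM was started with
    // -Dcassandra.disable_stcs_in_l0=true (case-insensitive "true").
    static final boolean DISABLE_STCS_IN_L0 =
            Boolean.getBoolean("cassandra.disable_stcs_in_l0");

    // Sketch of the decision: with the flag set, never fall back to
    // size-tiered compaction in level 0, however far behind L0 gets.
    static boolean mayFallBackToStcsInL0(int l0SstableCount, int threshold) {
        if (DISABLE_STCS_IN_L0)
            return false;
        return l0SstableCount > threshold;
    }

    private L0CompactionGate() {}
}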
I am getting quite confused as it seems I am not able to bootstrap a node
if I don't have at least 6/7 times the
What version?
Single disk or JBOD?
Vnodes?
--
Jeff Jirsa
> On Oct 15, 2017, at 12:49 PM, Stefano Ortolani wrote:
>
> Hi all,
>
> I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so far.
> Based on the source code it seems that this option doesn't affect compactions
>
Hi Jeff,
that would be 3.0.15, single disk, vnodes enabled (num_tokens 256).
Stefano
On Sun, Oct 15, 2017 at 9:11 PM, Jeff Jirsa wrote:
> What version?
>
> Single disk or JBOD?
>
> Vnodes?
>
> --
> Jeff Jirsa
>
>
> On Oct 15, 2017, at 12:49 PM, Stefano Ortolani wrote:
>
> Hi all,
>
> I have b
Can you post (anonymize as needed) nodetool status, nodetool netstats, nodetool
tpstats, and nodetool compactionstats?
--
Jeff Jirsa
> On Oct 15, 2017, at 1:14 PM, Stefano Ortolani wrote:
>
> Hi Jeff,
>
> that would be 3.0.15, single disk, vnodes enabled (num_tokens 256).
>
> Stefano
>
>>
Hi Jeff,
this is my third attempt at bootstrapping the node, so I tried several tricks that
might partially explain the output I am posting.
* To make the bootstrap incremental, I have been throttling the streams on
all nodes to 1 Mbit/s. I have been selectively unthrottling one node at a time,
hoping that woul
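For anyone wanting to script that per-node throttle/unthrottle step, here is a hedged sketch over JMX. It assumes the standard StorageService MBean and its setStreamThroughputMbPerSec operation, which is what "nodetool setstreamthroughput" drives, and that JMX is reachable on port 7199 without authentication:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch only: caps stream throughput on one node via JMX, which is what
// "nodetool setstreamthroughput <N>" does under the hood. Assumes JMX is
// reachable on port 7199 without authentication.
public class SetStreamThroughput {
    public static void main(String[] args) throws Exception {
        String host = args[0];                    // e.g. "10.0.0.12"
        int mbPerSec = Integer.parseInt(args[1]); // e.g. 1 to throttle, 200 to open up

        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
            // Operation name assumed from the StorageServiceMBean interface.
            mbs.invoke(storageService, "setStreamThroughputMbPerSec",
                       new Object[]{ mbPerSec }, new String[]{ "int" });
        }
    }
}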
I
You’re adding the new node as rac3
The rack-aware policy is going to make sure you get the rack diversity you
asked for by placing one replica of each partition in rac3, which is going
to blow up that instance
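To make the effect concrete, here is a toy sketch of rack-aware replica placement. It is a simplification, not NetworkTopologyStrategy's actual code: walking the ring and preferring racks not yet used means a lone node in rac3 lands in the replica set of every partition.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of rack-aware replica selection, not Cassandra's actual
// NetworkTopologyStrategy: walk the ring from the partition's position
// and take nodes until rf replicas are found, preferring unused racks.
public class RackAwareToy {
    record Node(String name, String rack) {}

    static List<Node> replicasFor(List<Node> ring, int start, int rf) {
        List<Node> replicas = new ArrayList<>();
        Set<String> racksUsed = new HashSet<>();
        // First pass: only take a node whose rack has not been used yet.
        for (int i = 0; i < ring.size() && replicas.size() < rf; i++) {
            Node n = ring.get((start + i) % ring.size());
            if (racksUsed.add(n.rack()))
                replicas.add(n);
        }
        // Second pass: if there are fewer racks than rf, fill up anyway.
        for (int i = 0; i < ring.size() && replicas.size() < rf; i++) {
            Node n = ring.get((start + i) % ring.size());
            if (!replicas.contains(n))
                replicas.add(n);
        }
        return replicas;
    }

    public static void main(String[] args) {
        // Two established racks plus a single new node in rac3.
        List<Node> ring = List.of(
                new Node("n1", "rac1"), new Node("n2", "rac2"),
                new Node("n3", "rac1"), new Node("n4", "rac2"),
                new Node("new", "rac3"));
        // With rf=3, every start position includes "new": the lone rac3
        // node ends up holding a replica of every partition.
        for (int start = 0; start < ring.size(); start++)
            System.out.println(replicasFor(ring, start, 3));
    }
}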
--
Jeff Jirsa
> On Oct 15, 2017, at 1:42 PM, Stefano Ortolani wrote:
>
(Should still be able to complete, unless you’re running out of disk or memory
or similar, but that’s why it’s streaming more than you expect)
--
Jeff Jirsa
> On Oct 15, 2017, at 1:51 PM, Jeff Jirsa wrote:
>
> I
> You’re adding the new node as rac3
>
> The rack aware policy is going to mak
Nice catch!
I’ve totally overlooked it.
Thanks a lot!
Stefano
On Sun, 15 Oct 2017 at 22:14, Jeff Jirsa wrote:
> (Should still be able to complete, unless you’re running out of disk or
> memory or similar, but that’s why it’s streaming more than you expect)
>
>
> --
> Jeff Jirsa
>
>
> On Oct 15,
Just did a restart on a node I'm upgrading from 3.11.0 to 3.11.1 and I am seeing a
delayed startup due to a large sequence of these types of messages:
WARN [main] 2017-10-16 12:53:16,040 QueryProcessor.java:160 - prepared
statement recreation error: select link, processed from to_editor where
timebl
It's been renamed to Number of partitions.
If you create a new cluster and mimic the tokens across fewer nodes, you will
still have downtime/missing data between the point when you copy all the
SSTables across and any new writes to the old cluster after you take the
copy.
The only way to really do this effectively is to do a DC migration. Brief r
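For context on what "mimic the tokens across fewer nodes" means mechanically, here is a hedged sketch; the node names and token values are placeholders. You collect the vnode token lists of the old nodes being collapsed and join them into one comma-separated initial_token value for the new node. The downtime caveat above still applies.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch only: merge the vnode token lists of the old nodes being collapsed
// into one comma-separated string for the new node's initial_token setting.
// Node names and token values are placeholders, not real cluster data.
public class MergeTokens {
    public static void main(String[] args) {
        Map<String, List<Long>> oldNodes = new LinkedHashMap<>();
        oldNodes.put("old-node-1", List.of(-9182736455463728191L, -12345678901234567L));
        oldNodes.put("old-node-2", List.of(2468013579246801357L, 8642097531864209753L));

        String initialToken = oldNodes.values().stream()
                .flatMap(List::stream)
                .map(String::valueOf)
                .collect(Collectors.joining(","));

        // Goes into cassandra.yaml on the new node; num_tokens must match
        // the number of tokens listed here.
        System.out.println("initial_token: " + initialToken);
    }
}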
I believe that's the decompressed data size, so if your data is heavily
compressed it might be perfectly logical for you to be doing such large
compactions. Worth checking what SSTables are included in the compaction.
If you've been running STCS for a while you probably just have a few very
large S
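As a quick worked example of the decompressed-size point (the sizes and the 0.25 ratio below are made-up illustrations, assuming the usual compressed/uncompressed convention for the reported compression ratio):

// Worked example only; the sizes and the ratio are made-up illustrations.
public class CompactionSizeExample {
    public static void main(String[] args) {
        double onDiskGb = 100.0;        // compressed size of the input SSTables on disk
        double compressionRatio = 0.25; // compressed size / uncompressed size

        // If compaction progress is reported against uncompressed data,
        // 100 GB on disk shows up as roughly 400 GB of data to compact.
        double reportedGb = onDiskGb / compressionRatio;
        System.out.printf("~%.0f GB reported for %.0f GB on disk%n", reportedGb, onDiskGb);
    }
}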