Hi Erick!
Just following up on your statement:
Limiting the seeds to 2 per DC means:
A) each node in a DC has at least 2 seeds, and those seeds belong to the
same DC
or
B) each node in a DC has at least 2 seeds, even across different DCs
Thanks,
Sergio
On Thu, 13 Feb 2020 at 19:46, Erick Ramirez wrote:
A seed node doesn't bootstrap, so if a new node is meant to act as a seed,
the official recommendation is to bootstrap the new node first, and only
after that list it as a seed.
The seed list is usually the same across all nodes in the cluster. You can
designate 2 nodes per DC as seeds in order to mitigate network latency.
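For illustration, a minimal cassandra.yaml sketch with 2 seeds per DC might
look like this (the addresses are hypothetical placeholders; 10.1.x.x = DC1,
10.2.x.x = DC2):

    # cassandra.yaml -- keep this identical on every node in the cluster
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # two seeds per DC, comma-separated in a single string
              - seeds: "10.1.0.1,10.1.0.2,10.2.0.1,10.2.0.2"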
>
> Just following up on your statement:
> Limiting the seeds to 2 per DC means:
> A) each node in a DC has at least 2 seeds, and those seeds belong to the
> same DC
> or
> B) each node in a DC has at least 2 seeds, even across different DCs
>
I apologise for the ambiguity of my previous response. I se
Thanks, all, for your support.
I executed the process we discussed (skipping the repair, since the table is
only read for reporting) and it worked fine in production.
Regards
Manish
Hi,
We have 2 datacenters in our cassandra cluster.
Whenever a node goes down in DC1 and hints get collected on all the other
nodes, we have noticed that hint replay to the DC1 node is very slow; but if
a node goes down in DC2 and comes back, hints replay quickly.
We are on 3.11.0
We are u
Krish, with the limited info, and assuming things like the hint throttle and
delivery threads are all equal, my guess would be that DC1 is your primary DC
and is busier than DC2. Got any diagnostic data/troubleshooting info you
could share? Otherwise, it's a little difficult to speculate as to what may
be going on.
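For reference, these are the hint-related settings in cassandra.yaml with
their 3.11 defaults (a sketch to compare against your own config, not a
recommendation):

    hinted_handoff_enabled: true
    max_hint_window_in_ms: 10800000      # 3 hours; hints stop accumulating after this
    hinted_handoff_throttle_in_kb: 1024  # per delivery thread, see note below
    max_hints_delivery_threads: 2
    hints_flush_period_in_ms: 10000
    max_hints_file_size_in_mb: 128

Note the throttle is reduced proportionally to the number of nodes in the
cluster, so on a busy cluster replay can look much slower than the raw
number suggests.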
DC2 is our main datacenter and serves all the traffic.
This cluster has materialized views.
On Tue, Feb 25, 2020 at 9:32 PM Erick Ramirez wrote:
> Krish, with the limited info, and assuming things like the hint throttle and
> delivery threads are all equal, my guess would be that DC1 is your primary
What's the reason for the nodes going down? Is it because the cluster is
overloaded? Hints will get handed off periodically when nodes come back to
life, but if they happen to go down again or become unresponsive (for
whatever reason), the handoff will be delayed until the next cycle. I think
it's every
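While you dig up diagnostics, a few nodetool commands can help observe and
control handoff on the delivering nodes (the throttle value below is just an
example):

    nodetool statushandoff                      # is hinted handoff currently running?
    nodetool sethintedhandoffthrottlekb 2048    # raise the throttle at runtime (example value)
    nodetool pausehandoff                       # temporarily stop dispatching hints
    nodetool resumehandoff                      # resume dispatching
    nodetool truncatehints                      # last resort: drop stored hints, then repair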