Thanks Jeff, that’s the type of parameter I was looking for but I missed it
when I first read it. We’ll ensure that dynamic snitch is enabled.
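For reference, a quick way to check is to look at cassandra.yaml (the path below is an assumption; adjust for your install). Dynamic snitching is on by default:

```shell
# Assumed config location; package vs tarball installs differ.
grep -E '^dynamic_snitch' /etc/cassandra/cassandra.yaml

# Relevant settings (defaults shown):
#   dynamic_snitch: true                     # wrap the configured snitch and score replicas
#   dynamic_snitch_badness_threshold: 0.1    # how much worse a replica must score before rerouting
```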
—
Cyril Scetbon
> On Aug 5, 2019, at 11:23 PM, Jeff Jirsa wrote:
>
> You can make THAT less likely with some snitch trickery (setting the badness
> for the rebuilding host) via jmx
Hi Cyril,
It will depend on the load balancing policy used in the client code.
If you're only accessing DC1, with the node being rebuilt living in DC2,
then you need your clients to use the DCAwareRoundRobinPolicy to
restrict connections to DC1 and avoid any queries hitting DC2.
We have clients in all our DCs.
Rebuild has always been much faster for us than repair. It operates like
bootstrap, streaming data from only one source replica for each token range
(you need to run a cleanup if it is run multiple times). Repair is a different
operation and is not supposed to be run on an empty node.
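As a concrete sketch of the rebuild-plus-cleanup sequence described above (the source DC name is an assumption; adapt to your topology):

```shell
# On the node being (re)built: stream each local token range from one replica in DC1.
nodetool rebuild -- DC1

# If rebuild was run more than once, remove duplicate/out-of-range data afterwards.
nodetool cleanup
```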
Can you elaborate on that? We use GPFS without cassandra-topology.properties.
—
Cyril Scetbon
> On Aug 5, 2019, at 11:23 PM, Jeff Jirsa wrote:
>
> some snitch trickery (setting the badness for the rebuilding host) via jmx
Assuming the rebuild is happening on a node in another DC, then there
should not be an issue if you are using LOCAL_ONE. If the node is in the
local DC (i.e., same DC as the client), I am inclined to think repair would
be more appropriate than rebuild but I am not 100% certain.
No, not strictly sufficient; it makes it much less likely, though.
A client may connect to another node and still send the request to that host if
the snitch picks it. You can make THAT less likely with some snitch trickery
(setting the badness for the rebuilding host) via JMX.
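A sketch of that JMX trickery using the jmxterm CLI (the jar path is an assumption, and the exact MBean name and severity value should be verified against your Cassandra version):

```shell
# Connect to the rebuilding node's JMX port (7199 by default) and raise its
# snitch severity so the dynamic snitch scores it as a bad replica choice.
java -jar jmxterm.jar -l rebuilding-host:7199 -n <<'EOF'
bean org.apache.cassandra.db:type=DynamicEndpointSnitch
run setSeverity 100.0
EOF
```

Remember to set the severity back to 0 once the rebuild completes, or the node will keep being deprioritized.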
Hey guys,
Can you confirm that disabling the native transport (nodetool disablebinary) is
enough with Cassandra 3.11+ to prevent clients from hitting inconsistent data on
that node when they use LOCAL_ONE consistency? (Particularly when the node is
rebuilding …)
I'd like to avoid any fancy client configuration.
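For completeness, the drain of client traffic being asked about is just:

```shell
# Stop accepting new CQL (native protocol) client connections on this node.
nodetool disablebinary

# Verify: prints "not running" when the native transport is off.
nodetool statusbinary

# Re-enable once the rebuild has finished.
nodetool enablebinary
```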