Calling all superheroes.
I have a long-standing Cassandra 2.1.12 ring with an occasional node that
gets restarted and is then flagged with the invalid gossip generation error,
leaving it marked down in nodetool status even though the logs make the node
look ok.
It’s only when I look at the ot
Hi,
To check that repairs are running, you should be able to see VALIDATION
compactions (using 'nodetool compactionstats -H'). You could also see some
streaming during the repair if there was some entropy (using 'nodetool
netstats -H | grep -v 100%').
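The two checks above can be combined into a quick loop while a repair is in flight; a sketch, assuming nodetool is on the PATH and pointed at a reachable node (the 10-second interval is arbitrary):

```
# Watch for VALIDATION compactions and any in-progress streaming
# while a repair runs. Ctrl-C to stop.
while true; do
    date
    nodetool compactionstats -H          # VALIDATION rows indicate repair work
    nodetool netstats -H | grep -v 100%  # any non-100% lines are active streams
    sleep 10
done
```

These commands only report on the node they are run against, so you would repeat them (or wrap them in ssh) for each node participating in the repair.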
To know roughly the state of your data, you ca
We are the Japan Cassandra User's Group.
We are announcing the Cassandra Summit Tokyo 2017, which will be held on
Tuesday October 5th at the Bellesalle Tokyo Nihonbashi in Nihonbashi,
Tokyo.
This summit is a one-day summit with several Japanese and English sessions.
There are various kinds of sessions includi
data will be distributed amongst racks correctly, but only if you are
using a snitch that understands racks together with NetworkTopologyStrategy.
SimpleStrategy doesn't understand racks or DCs. You should use a snitch
that understands racks and then transition to a 2-rack cluster, keeping
only 1 DC.
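A sketch of that transition, assuming GossipingPropertyFileSnitch (any rack-aware snitch works) and a hypothetical keyspace and DC name:

```
# cassandra-rackdc.properties on each node (GossipingPropertyFileSnitch):
#   dc=dc1
#   rack=rack1    # rack2 on the nodes in the second rack

-- then switch the keyspace to the rack-aware strategy:
ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'dc1': '2'};
```

After the ALTER, run 'nodetool repair' so existing data is redistributed to match the new replica placement.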
Hi list,
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '2'} AND durable_writes = true;
I'm using C* 3.11.0 with 8 nodes on AWS, 4 nodes in zone A and the other
4 nodes in zone B. The idea is to keep the cluster alive if zone A or B
goes dark and keep Q
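For zone-survivability, a rack-aware setup is usually used instead of SimpleStrategy: with Ec2Snitch, each AWS availability zone is reported as a rack, so NetworkTopologyStrategy can place the two replicas in different zones. A sketch, assuming Ec2Snitch is configured and the region reports as the DC name 'us-east' (both the snitch choice and the DC name are assumptions here):

```
CREATE KEYSPACE test WITH replication =
    {'class': 'NetworkTopologyStrategy', 'us-east': '2'}
    AND durable_writes = true;
```

With this placement, losing one zone leaves one replica of every row in the surviving zone.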