Hi Voytek,
I looked into this a little while ago, and couldn’t really find a definitive
answer. We ended up keeping the GossipingPropertyFileSnitch in our GCP
Datacenter; the only downside I could see is that you have to manually
specify the rack and DC. But doing it that way does allow yo
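For reference, the manual rack/DC assignment that GossipingPropertyFileSnitch requires lives in cassandra-rackdc.properties on each node; a minimal sketch (the dc/rack names are placeholders, not values from this thread):

```properties
# cassandra-rackdc.properties -- read by GossipingPropertyFileSnitch.
# Each node declares its own DC and rack and advertises them via gossip.
dc=gcp-dc1
rack=rack1
# Optional: append a suffix to the DC name the snitch reports.
# dc_suffix=_analytics
```

Because each node gossips its own values, there is no central topology file to keep in sync across the cluster.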
Hi,
We need to enable CDC in one of our clusters, which is on DSE 5.1. We need to
change the settings below:
cdc_enabled
cdc_raw_directory
cdc_total_space_in_mb
cdc_free_space_check_interval_ms
What values do you keep for the following?
cdc_total_space_in_mb
cdc_free_space_check_interval_ms
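For context, these settings live in cassandra.yaml; a sketch using what I believe are the Cassandra 3.11 / DSE 5.1 defaults (the path and sizes are illustrative, not a recommendation):

```yaml
# cassandra.yaml -- CDC settings (assumed defaults; verify for your version)
cdc_enabled: true
cdc_raw_directory: /var/lib/cassandra/cdc_raw
# Disk budget for unconsumed CDC segments. Once it fills up, writes to
# CDC-enabled tables are rejected, so size it for your consumer's lag.
cdc_total_space_in_mb: 4096
# How often the free-space check runs once the limit has been reached.
cdc_free_space_check_interval_ms: 250
```

Note that if nothing drains cdc_raw, the space limit is hit and writes to CDC-enabled tables start failing, so these two values mostly tune how quickly that backpressure kicks in.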
Is there an
Another way to purge gossip info from each node is to:
1. Gracefully stop Cassandra, i.e. nodetool drain, then kill the Cassandra PID
2. Move/delete files from $DATADIR/system/peers/
3. Add -Dcassandra.load_ring_state=false to the jvm.options file (or
JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false" in cassandra-env.sh)
4. Restart Cassandra service.
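The steps above could be sketched as a per-node script; paths, the service name, and the config location are assumptions for a typical package install, so adjust for your environment:

```shell
#!/usr/bin/env sh
# Sketch of the gossip-purge procedure above -- destructive, run per node.
DATADIR=/var/lib/cassandra/data

# 1. Gracefully stop Cassandra.
nodetool drain
sudo service cassandra stop    # or: kill "$(pgrep -f CassandraDaemon)"

# 2. Move the persisted peers state out of the data directory.
mkdir -p /tmp/peers-backup
mv "$DATADIR"/system/peers-* /tmp/peers-backup/

# 3. Skip loading the saved ring state on startup
#    (bare option in jvm.options; use JVM_OPTS in cassandra-env.sh instead).
echo '-Dcassandra.load_ring_state=false' | sudo tee -a /etc/cassandra/jvm.options

# 4. Restart Cassandra.
sudo service cassandra start
```

For a single stubborn endpoint, `nodetool assassinate <ip>` (available in 3.x) may also be worth considering before a full purge, though it is a heavier hammer.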
Thank you Romain
On Sat, Jul 27, 2019 at 1:42 AM Romain Hardouin
wrote:
> Hi,
>
> Here are some upgrade options:
> - Standard rolling upgrade: node by node
>
> - Fast rolling upgrade: rack by rack.
> If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP.
> For higher CL it's pos
Just a quick bump - hoping someone can shed some light on whether running
different snitches in different datacenters is a terrible idea or not. It'd
be fairly temporary; once the new DC is stood up and the nodes are rebuilt, the
old DC will be decommissioned.
On Thu, Jul 25, 2019 at 12:36 PM Voytek Jar
Is there a workaround to shorten the 72 hours to something shorter? (You said
"by default" - wondering if one can set a non-default value?)
Thanks,
Yuping
On Jul 29, 2019, at 7:28 AM, Oleksandr Shulgin
wrote:
> On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy wrote:
>
> Decommissioned 2 nodes from clust
We have the same issue. We observed that the JMX metric only cleared after
exactly 72 hours too.
On Jul 29, 2019, at 11:23 AM, Rahul Reddy wrote:
Also, the system.peers table doesn't have information on the old nodes; only
the ghost nodes remain in JMX.
On Mon, Jul 29, 2019, 7:39 AM Rahul Reddy wrote:
> We have removed nodes from a cluster many times but never seen the JMX
> down metric stay for 72 hours. So it has to be completely removed from
Thanks, Simon. Really good to know. I tried configuring it and it's working.
On Fri, Jul 26, 2019 at 9:56 PM Simon Fontana Oscarsson <
simon.fontana.oscars...@ericsson.com> wrote:
> Hi,
>
> To my knowledge there is no set date for 4.0; the community is
> prioritizing QA over fast releas
We have removed nodes from a cluster many times but never seen the JMX down
metric stay for 72 hours. So does the node have to be completely removed from
gossip for the metric to show as expected? This would be a problem for using
the metric for on-call alerting.
On Mon, Jul 29, 2019, 7:28 AM Oleksandr Shulgin <
oleksandr.sh
On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy
wrote:
>
> Decommissioned 2 nodes from the cluster; nodetool status doesn't list the
> nodes, as expected, but JMX metrics still show those 2 nodes as down.
> Nodetool gossip shows the 2 nodes in the Left state. Why does my JMX still
> show those nodes as down ev
Hello,
Decommissioned 2 nodes from the cluster; nodetool status doesn't list the
nodes, as expected, but JMX metrics still show those 2 nodes as down. Nodetool
gossip shows the 2 nodes in the Left state. Why does my JMX still show those
nodes as down even after 24 hours? Cassandra version 3.11.3. Anything
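One way to see why the metric lingers is to inspect the gossip state the node still holds for the decommissioned endpoints; a sketch:

```shell
# Endpoints that left the cluster are retained in gossip in LEFT state
# (by default for roughly 72 hours); list each endpoint and its status.
nodetool gossipinfo | grep -E '^/|STATUS'
```

Endpoints that show STATUS:LEFT are gone from nodetool status but still present in gossip, which is presumably what the JMX endpoint-state metrics are reflecting here.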