Hey All,
I'm coming back to my own question (see below), as this happened to us again
two days later, so we took the time to analyse the issue further. I'd also
like to share our experience and the workaround we figured out.
So, to quickly sum up the most important details again:
Hi,
We have a cluster with one Data Center of 3 nodes in GCP-US (RF=3). Current
Apache Cassandra version: 3.11.6. We are planning to add one new Data Center
of 3 nodes in GCP-India.
At peak hours, commit log generation on a single node on the GCP-US side is
around 1 GB per minute (i.e. 17+ MB/s).
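For reference, the usual procedure for adding a second DC looks roughly like
this (the keyspace name and DC names below are placeholders, assuming
NetworkTopologyStrategy and a gossiping snitch):

    -- run once, from any node: replicate the keyspace into the new DC as well
    ALTER KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'GCP-US': 3, 'GCP-India': 3};

    # then, on each new GCP-India node (started with auto_bootstrap: false),
    # stream the existing data over from the US DC
    nodetool rebuild -- GCP-US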
The main downside I see is that you're hitting a less-tested codepath. I
think very few installations have compression disabled today.
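For reference, the setting being discussed looks like this; the keyspace and
table names are just examples, not from a real schema:

    -- turn off sstable compression for one table, leaving any compression
    -- to the layer below (e.g. a compressing filesystem)
    ALTER TABLE my_ks.my_table
      WITH compression = {'enabled': 'false'};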
On Mon, Jan 25, 2021 at 7:06 AM Lapo Luchini wrote:
> Hi,
> I'm using a fairly standard install of Cassandra 3.11 on FreeBSD
> 12, by default filesystem is
>
>
>> I'm particularly trying to understand the fault-tolerant part of updating
>> Token Ring state on every node
>
> The new node only joins the ring (i.e. updates the ring's state) when the
> data streaming (bootstrapping) is successful. Otherwise, the existing ring
> remains as is and the joining node does not become part of the ring.
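To add a practical note to that: while a node is bootstrapping you can watch
the streaming, and if streaming fails the join can be retried without wiping
the node. A rough sketch, run on the joining node:

    # watch streaming progress during bootstrap
    nodetool netstats

    # if bootstrap streaming failed part-way, resume it instead of
    # starting over from scratch
    nodetool bootstrap resume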
Thanks Sean, I will definitely check this point.
Also, I read that query tracing is only available from version 4.0.
Is there any way to trace queries in my current version, 3.11.6?
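As a side note on 3.11.x: session tracing and probabilistic tracing are
already available there; full query logging is what arrives in 4.0. A minimal
example, assuming cqlsh access to a node (keyspace/table names are
illustrative):

    -- in cqlsh: trace only the statements run in this session
    TRACING ON;
    SELECT * FROM my_ks.my_table WHERE id = 1;
    TRACING OFF;

    # or trace a small random sample of all queries cluster-wide;
    # results are written to the system_traces keyspace
    nodetool settraceprobability 0.001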
On Tue 26 Jan 2021, 20:01, Durity, Sean R (US) wrote:
> I would be looking at the queries in the application to see if there are
> any cross-partition queries (ALLOW FILTERING or IN clauses across partitions).
I would be looking at the queries in the application to see if there are any
cross-partition queries (ALLOW FILTERING or IN clauses across partitions). This
looks like queries that work fine at small scale, but start hitting timeouts
once the data size has increased.
Also see if anyone has ad-hoc queries running against the cluster.
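A hypothetical illustration of the pattern Sean describes (table and column
names are made up, with the table partitioned by customer_id):

    -- anti-pattern: scans across partitions, fine on small data,
    -- times out as the table grows
    SELECT * FROM orders WHERE status = 'OPEN' ALLOW FILTERING;

    -- multi-partition IN: the coordinator fans out to many replicas
    SELECT * FROM orders WHERE customer_id IN (101, 202, 303);

    -- partition-restricted query: touches a single partition, stays fast
    SELECT * FROM orders WHERE customer_id = 101;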
Hi,
We have a cluster of 4 nodes, all in one DC (Apache Cassandra version: 3.11.6).
Things were working fine until last month, when all of a sudden we started
facing intermittent operation timeouts on the client side.
We have Prometheus + Grafana configured for monitoring.
On checking, we found the following: