Hello community,
BTW I am using Cassandra 3.11.4. From your comments, I understand that a CPU
spike and maybe a long GC may be expected at the snapshot creation under
specific circumstances. I will monitor the resources during snapshot creation.
I will come back with more news.
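For reference, snapshot creation and GC behaviour can be observed with standard nodetool commands; the keyspace name my_ks and the tag below are placeholders, and these run against a live cluster rather than standalone:

```
# Take a snapshot of one keyspace with an explicit tag
nodetool snapshot -t before-change my_ks

# Inspect GC statistics around the snapshot window
nodetool gcstats

# List snapshots, and clean up when done
nodetool listsnapshots
nodetool clearsnapshot -t before-change
```

Comparing `nodetool gcstats` (or the GC log) before and after the snapshot should show whether the CPU spike lines up with a long collection.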
Thanks a lot for your help.
Beyond this there are two decent tuning sets, but relatively dated at this point.

CASSANDRA-8150 proposed a number of changes to defaults based on how it had been tuned at a specific large (competent) user (ASF JIRA, issues.apache.org).

Amy Tobey wrote this guide around the 2.0/2.1 era, so it assumes things …
Amy's Guide. Still getting it done after all these years. Legendary.
On Tue, Sep 20, 2022 at 6:05 AM Jeff Jirsa wrote:
> Beyond this there are two decent tuning sets, but relatively dated at this
> point
>
> Cassandra-8150 proposed a number of changes to defaults based on how it
> had been tuned
Hi All
In one of our clusters, read requests with consistency LOCAL_QUORUM are
going across DCs. When we run a query with CONSISTENCY set to LOCAL_QUORUM in
cqlsh, with tracing on, we see READ and digest requests sent to nodes
in the other DC. I have checked gossipinfo, the peers table, and nodetool status.
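For anyone reproducing this, a minimal cqlsh session looks like the following (the keyspace, table, and predicate are placeholders):

```
CONSISTENCY LOCAL_QUORUM;
TRACING ON;
SELECT * FROM my_ks.my_table WHERE id = 1;
```

With tracing on, each READ and digest request in the trace shows the node it was sent to, so cross-DC messages are easy to spot.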
It sounds like read-repair chance is enabled on the table. Check the table
schema for a non-zero read_repair_chance. Cheers!
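In 3.11 the effective table options can be read straight from the schema tables; a sketch with placeholder keyspace/table names:

```sql
-- Check the read-repair and speculative-retry settings for one table
SELECT read_repair_chance, dclocal_read_repair_chance, speculative_retry
FROM system_schema.tables
WHERE keyspace_name = 'my_ks' AND table_name = 'my_table';
```

Note that read_repair_chance governs global (potentially cross-DC) read repair, while dclocal_read_repair_chance only involves replicas in the local DC.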
>
Thanks Erick for the response.
read_repair_chance is 0. Can speculative_retry cause this? We have that set
to the 99th percentile.
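If speculative retry is the suspect, one way to test (on a non-critical table; names are placeholders) is to disable it temporarily and re-run the traced query:

```sql
-- Temporarily turn off speculative retry for this table
ALTER TABLE my_ks.my_table WITH speculative_retry = 'NONE';

-- Restore the previous setting afterwards
ALTER TABLE my_ks.my_table WITH speculative_retry = '99PERCENTILE';
```

If the cross-DC READ messages disappear from the trace while speculative_retry is 'NONE', the extra replica chosen for the speculative read is the likely cause.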
Regards
Manish
On Wed, Sep 21, 2022 at 11:17 AM Erick Ramirez
wrote:
> It sounds like read-repair chance is enabled on the table. Check the table
> schema for a non-zero read_repair_chance.