Hello all,
We use Cassandra in a non-conventional way: our data is short-lived (a life
cycle of about 20-30 minutes), and each record is updated ~5 times and then
deleted. We have a gc_grace_seconds of 15 minutes.
We are seeing two problems:
1.) A certain number of Cassandra nodes go down, and then we
Hello,
> There are reports (in this ML too) that disabling dynamic snitching
> decreases response time.
I confirm that I have seen this improvement on clusters under pressure.
What effects are behind this improvement?
>
My understanding is that this is due to the fact that the clients are t
Hi all,
We have data that gets loaded into Hive/Presto every few hours.
We want that data transferred to Cassandra tables.
What are some high-performance ETL options for transferring data
between Hive or Presto and Cassandra?
Also, does anybody have any performance numbers comparin
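Not a benchmark, but one pattern that usually matters for write throughput is batching rows by partition key before writing, since unlogged single-partition batches are cheap in Cassandra while multi-partition batches are not. A stdlib-only sketch of just that grouping step (the names and sample rows are illustrative, not from any specific tool):

```python
# Sketch: group exported rows by partition key so each write batch
# touches a single Cassandra partition. Hypothetical helper, not
# tied to any particular ETL framework.
from collections import defaultdict

def group_by_partition(rows, key_index=0):
    """Bucket rows by the value at key_index (the partition key)."""
    batches = defaultdict(list)
    for row in rows:
        batches[row[key_index]].append(row)
    return dict(batches)

rows = [("user1", 10), ("user2", 5), ("user1", 7)]
print(group_by_partition(rows))
# {'user1': [('user1', 10), ('user1', 7)], 'user2': [('user2', 5)]}
```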
A small gc_grace_seconds value lowers the maximum allowed node downtime, which
is 15 minutes in your case. After 15 minutes of downtime you'll need to replace
the node, as you described. This interval looks too short to allow planned
maintenance. So, in case you set a larger value for gc_grace_secon
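To make the constraint concrete, a quick sketch using the 15-minute figure from the original post (the helper name is mine):

```python
# Sketch: why node downtime must stay below gc_grace_seconds.
# If a node is down longer than gc_grace_seconds, tombstones it missed
# may already have been purged on the live replicas, so bringing the
# stale node back could resurrect deleted data -- hence the replace.

gc_grace_seconds = 15 * 60  # 900 s, as in the original post

def must_replace(downtime_seconds: int) -> bool:
    """A node down longer than gc_grace_seconds must be replaced,
    not simply restarted and repaired."""
    return downtime_seconds > gc_grace_seconds

print(must_replace(10 * 60))  # False: restart + repair is still safe
print(must_replace(20 * 60))  # True: the node has to be replaced
```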
Does Cassandra TTL out the hints after max_hint_window_in_ms? From my
understanding, Cassandra only stops collecting hints after
max_hint_window_in_ms but can still keep replaying the hints if the node comes
back again. Is this correct? Is there a way to TTL out hints?
Thanks,
Pratik
From: Kyr
Thank you for replying, Alain!
Better use of the cache for 'pinned' requests explains the CL=ONE case well.
But in the case of CL=QUORUM/LOCAL_QUORUM, if I'm not wrong, the read request
is sent to all replicas, waiting for the first two to reply.
When dynamic snitching is turned on, the "data" request is sent
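For reference, the quorum arithmetic behind "first two to reply" (assuming RF=3; the helper is just illustrative):

```python
# Quorum size for a given replication factor: floor(RF / 2) + 1.
def quorum(rf: int) -> int:
    return rf // 2 + 1

print(quorum(3))  # 2 -- QUORUM at RF=3 waits for two replicas
print(quorum(5))  # 3
```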
Hello All,
I'm having JVM instability / OOM errors when attempting to auto-bootstrap a
9th node to an existing 8-node cluster (256 tokens). Each machine has 24
cores, 148 GB RAM, and 10 TB of disk (2 TB used). Under normal operation the 8
nodes have JVM memory configured with Xms35G and Xmx35G, and handle 2-4
Are you using materialized views or secondary indices?
--
Jeff Jirsa
> On Aug 6, 2018, at 3:49 PM, Laszlo Szabo
> wrote:
>
> Hello All,
>
> I'm having JVM unstable / OOM errors when attempting to auto bootstrap a 9th
> node to an existing 8 node cluster (256 tokens). Each machine has 24
Upgrading to 3.11.3 may fix it (there were some memory recycling bugs fixed
recently), but analyzing the heap will be the best option.
If you can, print the heap histogram and stack trace, or open a heap dump in
YourKit, VisualVM, or MAT, and show us what's at the top of the reclaimed
objec
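If it helps, a small stdlib-only sketch for pulling the biggest entries out of a `jmap -histo`-style histogram (the sample text below is made up):

```python
# Parse the top of a `jmap -histo` style class histogram to see which
# classes dominate the heap. The sample input is fabricated.

def top_classes(histo_text, n=2):
    entries = []
    for line in histo_text.strip().splitlines()[1:]:  # skip the header
        num, instances, nbytes, name = line.split()
        entries.append((int(nbytes), name))
    return [name for _, name in sorted(entries, reverse=True)[:n]]

sample = """num instances bytes class
1: 120000 480000000 [B
2: 90000 21600000 java.lang.String
3: 500 8000000 org.apache.cassandra.db.Memtable"""

print(top_classes(sample))  # ['[B', 'java.lang.String']
```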
>
> Does Cassandra TTL out the hints after max_hint_window_in_ms? From my
> understanding, Cassandra only stops collecting hints after
> max_hint_window_in_ms but can still keep replaying the hints if the node
> comes back again. Is this correct? Is there a way to TTL out hints?
No, but it won't
Hello,
with 2.1, in case a second Cassandra process/instance is started on a host (by
accident), could this result in some sort of corruption, even though Cassandra
will exit at some point in time due to not being able to bind TCP ports
already in use?
What we have seen in this scenario is somethin
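For what it's worth, the bind failure itself is plain OS behaviour and easy to reproduce outside Cassandra:

```python
# A second process binding the same TCP port fails at the OS level,
# which is what makes the accidental second Cassandra instance exit.
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # grab a free ephemeral port
port = first.getsockname()[1]
first.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
bind_failed = False
try:
    second.bind(("127.0.0.1", port))  # same port: this bind fails
except OSError:
    bind_failed = True                # EADDRINUSE, as in the startup log
second.close()
first.close()

print("second bind failed:", bind_failed)  # second bind failed: True
```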
Thanks a lot Anup! :-)
On Mon, Aug 6, 2018 at 5:45 AM, Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
> Hi,
>
> Few of the caveats can be found here:
> https://issues.apache.org/jira/browse/CASSANDRA-7423
>
> The JIRA is implemented in version *3.6* and you are on 3.0,
> so you are aff
Hello,
we are running Cassandra in AWS and on-premise at customer sites, currently 2.1
in production with 3.11 in load test.
In the migration path from 2.1 to 3.11.x, I'm afraid that at some point we'll
end up with incremental repairs being enabled / run for the first time
unintentionally, because:
a)