>
> The GC collector is G1. I have already repaired the node after scaling up,
> and the JVM issue reproduced. Can I increase the heap to 40 GB on a 64 GB VM?
>
I wouldn't recommend going beyond 31GB with G1; above roughly 32GB the JVM
loses compressed object pointers, so you hit diminishing returns, as I
mentioned before.
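If you do pin the heap explicitly, a minimal sketch of the relevant JVM
settings (the exact values and file are my assumption; recent versions read
them from jvm.options, older ones from cassandra-env.sh):

    -Xms31g
    -Xmx31g
    -XX:+UseG1GC

Keeping -Xms equal to -Xmx avoids resize pauses while the heap grows.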
Do you think the issue is related to materialized views?
Thanks a lot for sharing.
The node was added recently. The bootstrap failed because of too many
tombstones, so we started the node with bootstrap disabled. Some SSTables
were not created during bootstrap, so the missing files might be numerous. I
have set the repair thread count to 1. Should I als
It’s worth noting there can be issues with streaming between different
versions of C*. Note this excerpt from
https://thelastpickle.com/blog/2019/02/26/data-center-switch.html
Note that with an upgrade it’s important to keep in mind that *streaming in
a cluster running mixed versions of Cassandra i
I don't mean any disrespect, but let me offer you some friendly advice -- don't
do it to yourself. I think you would have a very hard time finding someone
who would recommend implementing a solution that involves mixed versions.
If you run into issues, it would be hell trying to unscramble that egg.
O
Is this the first time you've repaired your cluster? Because it sounds like
it isn't coping. The first thing you need to make sure of is to *not* run
repairs in parallel. They can overload your cluster -- only kick off a repair
on one node at a time on small clusters. For larger clusters, you might be
able
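Purely as an illustration (the host names are made up), a sequential
primary-range repair driven from a shell loop might look like:

    for host in node1 node2 node3; do
        nodetool -h "$host" repair -pr
    done

i.e. wait for each node's repair to finish before starting the next one.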
>
> I can reproduce this behavior every time in the 3rd datacenter, and there
> are network connectivity issues. Also, the cluster is not overloaded as this
> is a brand new cluster.
I don't quite understand what you mean. What can you reproduce? It would
be good if you could elaborate. Cheers!
Greetings,
We have an existing Cassandra cluster (3.0.9) running in production.
Now, we want to create data pipelines to ingest data from Cassandra and
persist it in Hadoop. We are thinking of using the CDC feature (available from
Cassandra 3.8) along with Kafka Connect.
We are thinking of creating a n
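For reference, once you are on a CDC-capable version (3.8+), enabling CDC is
a yaml flag plus a per-table property; a minimal sketch, with placeholder
keyspace/table names:

    # cassandra.yaml
    cdc_enabled: true

    -- CQL, per table
    ALTER TABLE my_keyspace.my_table WITH cdc = true;

Commit log segments for CDC-enabled tables then land in the cdc_raw directory
for a consumer (e.g. a Kafka Connect source) to pick up.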
Hi
As we know, data structures like bloom filters, compression metadata, and
index summaries are kept off heap. But once a table gets compacted, how
quickly is that memory reclaimed by the kernel?
Is it instant, or does it depend on when the reference is GCed?
Regards
Himanshu
Hello experts
I have a 9-node cluster on AWS. Recently, some nodes went down, and I want
to repair the cluster after restarting them. But I found that the repair
operation causes lots of memtable flushes, and then the JVM GC failed.
Consequently, the node hangs.
I am using Cassandra 3.1.0.
java vers
I could re-produce this behavior all the times in 3rd datacenter and there
is network connectivity issues. Also cluster is not overloaded as this is
brand new cluster.
On Wednesday, April 15, 2020, Erick Ramirez wrote:
> *Bootstrap 24359390-4443-11ea-af19-1fbf341b76a0*
>>
>>
> That bootstrap ses
Howdy all, this has been solved, and I don't want to waste anyone's time.
I just changed it in /usr/sbin/cassandra, and that fixed the problem.
Thank you for reading, and for everyone's continual contribution to this
community!
Kindest regards,
Daniel
On Wed, Apr 15, 2020 at 7:02 PM Daniel Klevians
Hello everyone,
I'm running Cassandra 2.2.15 on RHEL, and I've come across an interesting
issue when trying to get a certain Elasticsearch plugin to work.
The plugin needs to know where the Cassandra data is stored, and gets this
parameter from the JVM option -Dcassandra.storagedir.
While Cassa
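For what it's worth, that property is normally passed as a JVM system
property from the launcher/env script; a sketch of the sort of line involved
(the path here is just a placeholder):

    JVM_OPTS="$JVM_OPTS -Dcassandra.storagedir=/var/lib/cassandra"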
>
> *Bootstrap 24359390-4443-11ea-af19-1fbf341b76a0*
>
>
That bootstrap session ID is from January 31 at 8:02am Pacific time. I'm
going to speculate that you attempted to bootstrap a node and encountered
issues. The stream likely got orphaned as it seems you have some issues
with your cluster consider