A rolling restart alone can't solve the problem, but I think I found a way.
There are two folders with the same name but different suffixes in the data
directory, e.g. dayu_123 and dayu_234; the dayu_123 folder is empty and the
dayu_234 folder is not.
Then I used CQL to query system_schema.tables; the id of the table nam
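For anyone hitting the same thing, a query along these lines shows the table id recorded in the schema (the keyspace name below is just a placeholder), which should match the suffix of the live folder:

    SELECT id FROM system_schema.tables
    WHERE keyspace_name = 'my_ks' AND table_name = 'dayu';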
Hello,
Concerning slow queries:
https://issues.apache.org/jira/browse/CASSANDRA-12403
How do I activate debug logging for slow queries?
I tried with nodetool setlogginglevel org.apache.cassandra.db.monitoring DEBUG
and also with log4j-server.properties, and I did not get the expected result.
Hello Jean Carlo,
To activate Debug mode, you should edit "logback.xml", not
"log4j-server.properties".
Ahmed.
Thank you, Ahmed!
Regards
Jean Carlo
"The best way to predict the future is to invent it" Alan Kay
I have a 6-node cluster I'm migrating to the new i3 types.
But at the same time I want to migrate to a different AZ.
What happens if I do the "running node replace method" with 1 node at a
time moving to the new AZ? Meaning, I'll have temporarily:
5 nodes in AZ 1c
1 new node in AZ 1e.
I'll wash-
The single node in 1e will be a replica for every range (and you won’t be able
to tolerate an outage in 1c), potentially putting it under significant load
--
Jeff Jirsa
> On Jun 28, 2018, at 7:02 AM, Randy Lynn wrote:
>
> I have a 6-node cluster I'm migrating to the new i3 types.
> But at th
You don't have to use EC2 snitch on AWS, but if you have already started with
it, it may put a node in a different DC.
If your data density won't be ridiculous, you could add 3 nodes to a different
DC/Region and then sync up. After the new DC is operational you can remove one
at a time from the old DC an
Already running with Ec2.
My original thought was a new DC parallel to the current, and then
decommission the other DC.
Also my data load is small right now.. I know small is a relative term.. each
node is carrying about 6GB..
So given the data size, would you go with parallel DC or let the new AZ
Parallel load is the best approach; then switch your data access code to
only access the new hardware. After you verify that there are no local reads /
writes on the OLD dc and that updates arrive only via replication, go ahead
and change the replication factor on the keyspace to have zero r
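A rough sketch of that last step (keyspace and DC names are placeholders; leaving the old DC out of the map gives it zero replicas):

    ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'new_dc': 3};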
You can also enable traceprobability: /opt/cassandra/bin/nodetool
settraceprobability 1
It will populate the system_traces keyspace, where you can see details on queries.
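For example, once tracing is on you can look at recent sessions (duration is in microseconds; note that a probability of 1 traces every request, which can be heavy on a busy cluster):

    SELECT session_id, duration, request, started_at
    FROM system_traces.sessions LIMIT 10;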
Hi,
Please, how can I check the health of my cluster / data center using
Cassandra?
In fact I'd like to generate a history of the state of each node, a history
of the failures of my cluster (20% failure in a day, 40% failure
in a day, etc...).
Thank you so much.
Kind regards.
I have Datadog monitoring the JVM heap.
Running 3.11.1.
20GB heap
G1 for GC.. all the G1GC settings are out-of-the-box
Does this look normal?
https://drive.google.com/file/d/1hLMbG53DWv5zNKSY88BmI3Wd0ic_KQ07/view?usp=sharing
I'm a C# .NET guy, so I have no idea if this is normal Java behavior.
-
Hi All,
I am trying to back up the Cassandra DB, but by default it is saving the
snapshots in the default location.
Is there any way we can specify the location where we want to store the
snapshots?
Regards
Sanjeev
No - they'll hardlink into the snapshot folder on each data directory. They
are true hardlinks, so even if you could move it, it'd still be on the same
filesystem.
Typical behavior is to issue a snapshot, and then copy the data out as
needed (using something like https://github.com/JeremyGrosser/t
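A rough sketch of that pattern (tag, keyspace, and destination are placeholders; snapshots land under each table's data directory):

    nodetool snapshot -t backup_20180628 my_ks
    # then copy the snapshot dirs off the box, e.g.:
    rsync -a /var/lib/cassandra/data/my_ks/*/snapshots/backup_20180628 \
        backuphost:/backups/$(hostname)/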
It depends a bit on which collector you're using, but fairly normal. Heap
grows for a while, then the JVM decides via a variety of metrics that it's
time to run a collection. G1GC is usually a bit steadier and less sawtooth
than the Parallel Mark Sweep, but if your heap's a lot bigger than neede
Thanks for the feedback..
Getting tons of OOM lately..
You mentioned overprovisioned heap size... well...
tried 8GB = OOM
tried 12GB = OOM
tried 20GB w/ G1 = OOM (and long GC pauses usually over 2 secs)
tried 20GB w/ CMS = running
We're on Java 8 update 151.
3.11.1.
We've got one table that's got
Hi all,
We use Prometheus to monitor Cassandra and then put it on Grafana for
dashboards.
What's the parameter to measure the throughput of Cassandra?
Odd. Your "post-GC" heap level seems a lot lower than your max, which
implies that you should be OK with ~10GB. I'm guessing either you're
genuinely getting a huge surge in needed heap and running out, or it's
falling behind and garbage is building up. If the latter, there might be
some tweaking
There is a need for a repair with both DCs, as rebuild will not stream all
replicas; so unless you can guarantee you were perfectly consistent at the
time of rebuild, you'll want to do a repair after the rebuild.
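On each node in the new DC that would look roughly like this (the DC name is a placeholder):

    nodetool rebuild old_dc    # streams one replica's worth of data from old_dc
    nodetool repair -full      # full repair to pick up the remaining replicas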
On another note you could just replace the nodes but use GPFS instead of
EC2 snitch, using the
So we have two data centers already running..
AP-SYDNEY, and US-EAST.. I'm using Ec2Snitch over a site-to-site tunnel..
I'm wanting to move the current US-EAST from AZ 1a to 1e..
I know all docs say use ec2multiregion for multi-DC.
I like the GPFS idea. Would that work with multi-DC too?
What
Abdul - I use Datadog.
I track the "latency one minute rate" for both reads and writes.
But I'm interested to see what others say and whether I got that right.
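For what it's worth, the metric I'm reading maps to the standard client request timers (worth double-checking the exact names against your Cassandra version):

    org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency
    org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency

The OneMinuteRate attribute on those is requests per second, which is effectively a throughput number.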
When you run the tpstats or tablestats subcommands in nodetool you are
actually accessing data inside Cassandra via JMX.
You can start there.
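For a quick look (the keyspace name is a placeholder):

    nodetool tpstats             # thread pool stats: active/pending/blocked/dropped
    nodetool tablestats my_ks    # per-table latencies, sstable counts, space used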
Rahul
Just curious -
From which instance type are you migrating to the i3 type, and what are the
reasons to move to i3?
Are you going to take benefit from the NVMe instance storage - if yes, how?
Since we are also migrating our cluster on AWS - but we are currently using
r4 instances, so I was intereste