Re: Cassandra snapshot restore with VNODES missing some data

2017-08-31 Thread kurt greaves
What Erick said. That error in particular implies you aren't setting all 256 tokens in initial_token

Re: Cassandra snapshot restore with VNODES missing some data

2017-08-31 Thread Lutaya Shafiq Holmes
SOME ONE HELP ME GET STARTED WITH CASSANDRA IN WINDOWS

Re: Cassandra snapshot restore with VNODES missing some data

2017-08-31 Thread Oleksandr Shulgin
On Thu, Aug 31, 2017 at 10:14 AM, Lutaya Shafiq Holmes < lutayasha...@gmail.com> wrote: > SOME ONE HELP ME GET STARTED WITH CASSANDRA IN WINDOWS Given your user profile picture, I freaked out for a second thinking the 45th president of the US was shouting at us to get started with Cassandra...

Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread qf zhou
I am running a cluster with 3 nodes, and the following errors occur. After I restart one node, the error happens again. I don't know why. Who can help me? Thank you!! ERROR [ReadStage-31] 2017-08-31 15:08:20,878 JVMStabilityInspector.java:141 - JVM state determined to be unstable. Ex

RE: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread Jonathan Baynes
Can you tell us what version you are on? -Original Message- From: qf zhou [mailto:zhouqf2...@gmail.com] Sent: 31 August 2017 10:52 To: user@cassandra.apache.org Subject: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory I am running a cluster with 3

Re: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread qf zhou
I am using Cassandra 3.9 with cqlsh 5.0.1. > On Aug 31, 2017, at 5:54 PM, Jonathan Baynes wrote: > > again

RE: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread Jonathan Baynes
I wonder if it's related to this bug (below), which is currently unresolved, albeit reproduced way back in 2.1.11: https://issues.apache.org/jira/browse/CASSANDRA-10689 From: qf zhou [mailto:zhouqf2...@gmail.com] Sent: 31 August 2017 10:58 To: user@cassandra.apache.org Subject: Re: Cassand

Re: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread Chris Lohfink
What version of Java are you running? There is a "kinda leak" in the JVM around this that you may be running into; you can try -Djdk.nio.maxCachedBufferSize=262144 if you are above 8u102. You can also try increasing the size allowed for direct byte buffers. It defaults to the size of the heap: -XX:MaxDirectMemorySize=?G Some NIO
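As a sketch, the two JVM options discussed in this thread would go into jvm.options (or cassandra-env.sh on older packaging); the values below are illustrative, not tuning recommendations:

```
# Cap the size of per-thread cached direct buffers (takes effect on JDK 8u102+)
-Djdk.nio.maxCachedBufferSize=262144

# Raise the ceiling for direct (off-heap) memory; it defaults to the heap size
-XX:MaxDirectMemorySize=8G
```

Note that raising MaxDirectMemorySize only buys headroom; if the cached-buffer growth is the real cause, the maxCachedBufferSize cap is the part that addresses it.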

Re: Cassandra snapshot restore with VNODES missing some data

2017-08-31 Thread Jai Bheemsen Rao Dhanwada
I double-checked that I am setting all 256 tokens, verified manually. When I start Cassandra with an empty data directory it starts up fine. Now if I restart Cassandra without making any changes, it won't start and gives the same error. I captured the nodetool status and nodetool ring output and compared the to
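Since the thread hinges on whether all 256 tokens actually make it into initial_token, a small sanity-check helper can catch a short list before startup. A minimal sketch (the function name and the 256-token expectation are assumptions matching the default num_tokens: 256):

```python
def initial_token_line(tokens):
    """Build the cassandra.yaml initial_token value from a node's token list,
    refusing to proceed unless all 256 vnode tokens are present."""
    if len(tokens) != 256:
        raise ValueError(f"expected 256 tokens, got {len(tokens)}")
    # cassandra.yaml expects a single comma-separated string
    return "initial_token: " + ",".join(tokens)

# Illustrative failure with a deliberately short list:
try:
    initial_token_line(["-9223372036854775808", "0"])
except ValueError as e:
    print(e)  # expected 256 tokens, got 2
```

The token strings themselves would come from the pre-snapshot `nodetool ring` output for the host being restored.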

Question about nodetool repair

2017-08-31 Thread Harper, Paul
Hello All, I have a 6 node ring with 3 nodes in DC1 and 3 nodes in DC2. I ssh'd into node5 on DC2, which was in a "DN" state, and ran "nodetool repair". I've had this situation before and ran "nodetool repair -dc DC2". I'm trying to understand what, if anything, is different between those commands. What are they actuall

Re: Question about nodetool repair

2017-08-31 Thread Blake Eggleston
Specifying a dc will only repair the data in that dc. If you leave out the dc flag, it will repair data in both dcs. You probably shouldn't be restricting repair to one dc without a good rationale for doing so. On August 31, 2017 at 8:56:24 AM, Harper, Paul (paul.har...@aspect.com) wrote: Hello
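For reference, the two invocations being compared would look like the following (the keyspace name is illustrative; the datacenter flag is spelled -dc / --in-dc in recent nodetool versions):

```
# Repairs the node's ranges using replicas in every datacenter
nodetool repair my_keyspace

# Restricts the repair participants to replicas in DC2 only
nodetool repair -dc DC2 my_keyspace
```

The single-DC form can leave the other datacenter's replicas unrepaired, which is why it should only be used deliberately.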

RE: Invalid Gossip generation

2017-08-31 Thread Mark Furlong
What do you recommend on taking this node out of the cluster, a decommission or a removenode? Since the communication between nodes is getting invalid gossip generation messages, I would think a decommission might not be effective. Thanks Mark From: Erick Ramirez [mailto:flig

Re: nodetool gossipinfo question

2017-08-31 Thread Blake Eggleston
That's the value version. Gossip uses versioned values to work out which piece of data is the most recent. Each node has its own highest version, so I don't think it's unusual for that to differ between nodes. When you say the node crashes, do you mean the process dies? On August 2

Re: Cassandra 3.11 is compacting forever

2017-08-31 Thread Igor Leão
Hey Kurt, Thanks for your reply. As soon as the whole cluster was upgraded (using existing nodes) it worked pretty well. After a while, the high CPU usage / pending compactions came back, affecting the whole cluster. It's still an open problem. 2017-08-21 20:24 GMT-03:00 kurt greaves : > Why are you adding

Re: nodetool repair failure

2017-08-31 Thread Fay Hou [Storage Service] ­
What is your GC_GRACE_SECONDS? What kind of repair option do you use for nodetool repair on a keyspace? Did you start the repair on one node? Did you use nodetool repair -pr, or just "nodetool repair keyspace"? How many nodetool repair processes do you run on the nodes? On Sun, Jul 30, 2017 a

Re: Cassandra - Nodes can't restart due to java.lang.OutOfMemoryError: Direct buffer memory

2017-08-31 Thread John Sanda
I am not sure which version of Netty is in 3.9, but maybe you are hitting https://issues.apache.org/jira/browse/CASSANDRA-13114. I hit this in Cassandra 3.0.9 which uses Netty 4.0.23. Here is the upstream netty ticket https://github.com/netty/netty/issues/3057. On Thu, Aug 31, 2017 at 10:15 AM, Ch

Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-08-31 Thread Jai Bheemsen Rao Dhanwada
Hello All, I am looking to capture the CF-level read and write counts and latency. As of now I am using the Telegraf plugin to capture JMX metrics. What are the MBeans, scope, and metric to look at for the CF-level metrics?

Re: Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-08-31 Thread Christophe Schmitz
Hello Jai, Did you have a look at the following page: http://cassandra.apache.org/doc/latest/operating/metrics.html In your case, you would want the following MBeans: org.apache.cassandra.metrics:type=Table,keyspace=<keyspace>,scope=<table>,name=<MetricName> with MetricName set to ReadLatency and WriteLatency Cheers, Chris

Re: Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-08-31 Thread Jai Bheemsen Rao Dhanwada
I did look at the document and tried setting up the metric as follows, but this does not match the total read requests. I am using "ReadLatency_OneMinuteRate" /org.apache.cassandra.metrics:type=ColumnFamily,keyspace=*,scope=*,name=ReadLatency On Thu, Aug 31, 2017 at 4:17 PM, Christophe S

Re: Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-08-31 Thread Christophe Schmitz
Hi Jai, The ReadLatency MBean exposes a few metrics, including the Count one, which is the total read requests you are after. See attached screenshot. Cheers, Christophe On 1 September 2017 at 09:21, Jai Bheemsen Rao Dhanwada < jaibheem...@gmail.com> wrote: > I did look at the document and tried
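To make the Count-vs-rate distinction concrete, here is a minimal sketch of pulling the cumulative count out of the MBean's attributes. It assumes the metrics are exposed as JSON (for example via a Jolokia-style HTTP agent); the keyspace/table names and the response shape are illustrative assumptions:

```python
# Hypothetical MBean name, following the metrics docs pattern:
mbean = ("org.apache.cassandra.metrics:"
         "type=Table,keyspace=my_ks,scope=my_table,name=ReadLatency")

def total_reads(response):
    """Pull the cumulative read count out of a JSON-style attribute read.

    ReadLatency is a timer metric: Count is the total number of read
    requests, while OneMinuteRate is a decaying per-second rate, which is
    why the two values never match."""
    return response["value"]["Count"]

# Illustrative response shape for the MBean above:
sample = {"value": {"Count": 12345, "OneMinuteRate": 4.2}}
print(total_reads(sample))  # → 12345
```

The same approach applies to WriteLatency: read Count for totals, and the rate attributes only when a per-second figure is wanted.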

Re: Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-08-31 Thread Jai Bheemsen Rao Dhanwada
okay, let me try it out On Thu, Aug 31, 2017 at 8:30 PM, Christophe Schmitz < christo...@instaclustr.com> wrote: > Hi Jai, > > The ReadLatency MBean expose a few metrics, including the count one, which > is the total read requests you are after. > See attached screenshot > > Cheers, > > Christoph

old big tombstone data file occupy much disk space

2017-08-31 Thread qf zhou
I am using a cluster with 3 nodes, and the Cassandra version is 3.0.9. I have used it for about 6 months. Now each node has about 1.5T of data on disk. I found some sstable files are over 300G. Using the sstablemetadata command, I found: Estimated droppable tombstones: 0.9622972799707109.

Re: old big tombstone data file occupy much disk space

2017-08-31 Thread Jeff Jirsa
Use user-defined compaction to run a single-sstable compaction on just that sstable. It's a nodetool command in very recent versions, or a JMX method in older versions -- Jeff Jirsa > On Aug 31, 2017, at 11:04 PM, qf zhou wrote: > > I am using a cluster with 3 nodes and the cassandra version
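On versions where the nodetool command exists (it was added around 3.4, if memory serves), the invocation looks roughly like this; the data path is illustrative:

```
# Compact one SSTable in place so its droppable tombstones can be purged
nodetool compact --user-defined /var/lib/cassandra/data/my_ks/my_table-<uuid>/mc-100963-big-Data.db
```

On older versions, the equivalent is the forceUserDefinedCompaction JMX operation on the CompactionManager MBean, as shown later in this thread.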

Re: old big tombstone data file occupy much disk space

2017-08-31 Thread qf zhou
dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction $dataPath" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199 In th