Thank you so much Andrew. I will start reading it.
On Thu, 6 Sep 2018, 10:26 Andrew Baker, wrote:
> Hi Shyam,
>
> Those are big questions! The book *Cassandra: The Definitive Guide* is
> a good place to start; it will walk you through a little bit of each of
> those questions. It should be a
Hello Thomas.
Be aware that this behavior happens when the compaction throughput is set
to *0* (unthrottled/unlimited). I believe the estimate uses the speed limit
for calculation (which is often quite wrong anyway).
I just meant to say, you might want to make sure that it's due to cleanup
ty
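For reference, the throttle can be checked and changed at runtime with nodetool (a quick sketch, assuming nodetool can reach the node with its default JMX settings):

    # show the current compaction throughput cap in MB/s; 0 means unthrottled
    nodetool getcompactionthroughput

    # throttle compactions to 32 MB/s, or pass 0 to lift the limit again
    nodetool setcompactionthroughput 32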
IMHO, Cassandra writes are more of a CPU-bound task, so when determining cluster
write throughput, what CPU usage percentage (averaged across all cluster nodes) should
be treated as the limit? To rephrase: what's the normal CPU usage in a Cassandra
cluster (while no compaction, streaming or heavy reads are running
On Thu, Sep 6, 2018 at 11:50 AM Alain RODRIGUEZ wrote:
>
> Be aware that this behavior happens when the compaction throughput is set
> to *0* (unthrottled/unlimited). I believe the estimate uses the speed
> limit for calculation (which is often quite wrong anyway).
>
As far as I can remember
>
> As far as I can remember, if you have unthrottled compaction, then the
> message is different: it says "n/a".
Ah right!
I am now completely convinced this needs a JIRA as well (indeed, if it's
not fixed in C*3+, as Jeff mentioned).
Thanks for the feedback Alex.
On Thu, 6 Sept 2018 at 11:06,
Alain,
compaction throughput is set to 32.
Regards,
Thomas
From: Alain RODRIGUEZ
Sent: Thursday, 06 September 2018 11:50
To: user@cassandra.apache.org
Subject: Re: nodetool cleanup - compaction remaining time
Hello Thomas.
Be aware that this behavior happens when the compaction throughput
What I have done is:
1) Added more disks, so the compaction can carry on.
2) When I switched to LCS from STCS, the STCS queues for processing the
big sstables remained, so I stopped those queues with nodetool stop
-id queue_id (commands sketched below),
and LCS compaction has started to process sstables; I'm u
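For reference, the leftover STCS compactions can be found and stopped roughly like this (illustrative commands; the id is whatever compactionstats reports for the queue you want to kill):

    # list running compactions with their ids and progress
    nodetool compactionstats

    # stop a single compaction by id (the uuid below is a made-up example)
    nodetool stop -id 8f0e2a60-b1f5-11e8-9e5d-000000000000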
Thank you Jeff.
During the migration, how can I test/validate against Cassandra, particularly since I am
going for a "parallel run"? Any sample strategy?
Regards,
Shyam
On Thu, 6 Sep 2018, 09:48 Jeff Jirsa, wrote:
> It very much depends on your application. You'll PROBABLY want to double
> write for some per
We are receiving the following error
9140- at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.10.jar:3.0.10]
9141- at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
9142:WARN [SharedPool-Worker-1] 2018-09-06 14:29:46,071
AbstractLocalAwareExecutor
Here is the stack trace from the failure; it looks like it's trying to
gather all the column family metrics and going OOM. Is this just for the JMX
metrics?
https://github.com/apache/cassandra/blob/cassandra-2.1.16/src/java/org/apache/cassandra/metrics/ColumnFamilyMetrics.java
ERROR [MessagingServic
Remove my email please
From: Vitali Dyachuk [mailto:vdjat...@gmail.com]
Sent: Thursday, 6 September 2018 08:00
To: user@cassandra.apache.org
Subject: Re: Large sstables
What i have done is:
1) added more disks, so the compaction will carry on
2) when i've switched to LCS fro
Hi all,
We are fairly new to Cassandra. We began looking into the CDC feature
introduced in 3.0. As we spent more time looking into it, the complexity
began to add up (e.g. duplicated mutations based on RF, out-of-order
mutations, mutations that do not contain the full row of data, etc.). These
limitations h
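For context, CDC has to be switched on both globally and per table; a minimal sketch (keyspace and table names are made up):

    # cassandra.yaml on every node (read at startup):
    #   cdc_enabled: true

    # then flag the table itself, e.g. via cqlsh:
    cqlsh -e "ALTER TABLE my_keyspace.my_table WITH cdc = true;"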
Hello Folks,
Does anybody know of good documentation on Cassandra stress testing?
I have the questions below.
1) Which server is best to run the test from, the Cassandra server or the application
server?
2) I am using the DataStax Java driver; is there any good documentation for stress testing
specific to this driver?
3) Ho
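Not specific to the Java driver, but the cassandra-stress tool that ships with Cassandra is the usual starting point; an illustrative run from a separate load-generation box (host, row counts and thread numbers are placeholders):

    # write 1M rows with 50 client threads against one contact point
    cassandra-stress write n=1000000 cl=quorum -rate threads=50 -node 10.0.0.1

    # then read the same data back to gauge read latency
    cassandra-stress read n=1000000 -rate threads=50 -node 10.0.0.1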
Hi, 3-node cluster, Cassandra 3.9, GossipingPropertyFileSnitch, one DC
I removed a dead node with `nodetool assassinate`. It was also a seed node, so I
removed it from the seeds list on the two other nodes and restarted them.
But I still see in the log
`DEBUG [GossipTasks:1] 2018-09-06 18:32:05,149 Gossiper.java
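A couple of commands that might help confirm whether the assassinated node is really gone (a sketch; run them on each remaining node):

    # gossip's view of the cluster, including any lingering endpoint state
    nodetool gossipinfo

    # what the node has persisted about its peers
    cqlsh -e "SELECT peer, host_id FROM system.peers;"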
On 09/06/2018 01:48 PM, Vlad wrote:
> Hi,
> 3 node cluster, Cassandra 3.9, GossipingPropertyFileSnitch, one DC
>
> I removed dead node with `nodetool assassinate`. It was also seed node,
> so I removed it from seeds list on two other nodes and restarted them.
>
> But I still see in log
> `DEBUG [
Hi,
this node isn't in system.peers on either node.
On Wednesday, August 29, 2018 4:22 PM, Vlad
wrote:
Hi,
> You'll need to disable the native transport
Well, this is what I did already, it seems repair is running.
I'm not sure whether repair will finish within 3 hours, but I can run it
Hi, this node isn't in system.peers on either node.
On Thursday, September 6, 2018 10:02 PM, Michael Shuler
wrote:
On 09/06/2018 01:48 PM, Vlad wrote:
> Hi,
> 3 node cluster, Cassandra 3.9, GossipingPropertyFileSnitch, one DC
>
> I removed dead node with `nodetool assassinate`. It was a
Hi,
We are testing Cassandra 3.11.2 and we saw that it contains a critical
bug which was fixed in 3.11.3 (
https://issues.apache.org/jira/browse/CASSANDRA-13929).
After about 1 month of testing, we haven't encountered this bug in our
environment, but to be sure before going to production, we
It's interesting and a bit surprising that 256 write threads isn't enough.
Even with a lot of cores, I'd expect you to be able to saturate CPU with
that many threads. I'd make sure you don't have other bottlenecks, like
GC, IOPS, network, or "microbursts" where your load is actually fluctuating
be
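One quick way to see whether the write path itself is what's backing up, rather than some other stage (illustrative; run on a busy node while the load test is going):

    # per-stage thread pool activity; watch Pending/Blocked on
    # Native-Transport-Requests and MutationStage
    nodetool tpstats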