On Thu, Apr 24, 2014 at 3:51 PM, Paulo Ricardo Motta Gomes <
paulo.mo...@chaordicsystems.com> wrote:
> If I write with RF=2, CL=ONE: one mutation is accepted, the write returns
> and the other mutation is dropped. Does the coordinator store a hint of the
> dropped replica? Even without running rep
The official docs say that dropped mutations are only fixed by Read Repair
and Anti-entropy (http://wiki.apache.org/cassandra/FAQ#dropped_messages).
However, in this thread (
http://grokbase.com/t/cassandra/user/1235ctdbca/mutation-dropped-messages)
Aaron Morton says that Hinted Handoff also repair
I don't know about Hector, but the DataStax Java driver needs just one IP from
the cluster and it will discover the rest of the nodes. Then by default it
will round-robin requests across the nodes. So if Hector does the same, the
pattern will appear again.
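A minimal sketch of that discovery + round-robin behaviour with the DataStax
Java driver (2.0-era API; the contact point name is a placeholder, and
RoundRobinPolicy is set explicitly only to make the balancing visible):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.RoundRobinPolicy;

    public class DriverDiscovery {
        public static void main(String[] args) {
            // One contact point is enough; the driver discovers the rest of the ring.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("node1")                        // placeholder host
                    .withLoadBalancingPolicy(new RoundRobinPolicy()) // spread requests over all nodes
                    .build();
            Session session = cluster.connect();
            // Every discovered host should show up here, not just the contact point.
            System.out.println(cluster.getMetadata().getAllHosts());
            cluster.close();
        }
    }

If all hosts are listed but only one is busy, the imbalance is more likely on
the data model or token side than in the client.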
Did you look at the size of the dirs?
T
I have CPU to spare but have reached the HDD limit.
Well, this might deserve its own conversation thread, but I did reach the
limit of IO after using a wide row and counter column family ... the
definition looks like this:
a int, b int, c string, timestamp int, d counter, e counter, f counter, g
counter
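For what it's worth, if that layout were expressed in CQL it would have to be
a counter table, where everything except the counters goes into the primary
key (counter tables cannot mix counters with regular non-key columns). A rough
sketch with made-up table/keyspace names, run through the DataStax Java driver:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CounterSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("node1").build(); // placeholder host
            Session session = cluster.connect("my_ks");                           // placeholder keyspace

            // Wide-row counter table: (a, b) partitions the data, (c, ts) clusters it,
            // and the four counters are the only non-key columns.
            session.execute("CREATE TABLE IF NOT EXISTS wide_counters ("
                    + " a int, b int, c text, ts int,"
                    + " d counter, e counter, f counter, g counter,"
                    + " PRIMARY KEY ((a, b), c, ts))");

            // Counter writes are read-modify-write on the replica side, which is
            // part of why they hit spinning disks so hard under heavy load.
            session.execute("UPDATE wide_counters SET d = d + 1, e = e + 1"
                    + " WHERE a = 1 AND b = 2 AND c = 'x' AND ts = 0");

            cluster.close();
        }
    }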
On Thu, Apr 24, 2014 at 1:54 PM, Batranut Bogdan wrote:
> Is this a setting that will have an impact only on fast CPU + SSD?
>
It's a setting that will only have impact if you have CPU or IO to spare.
You don't need a fast CPU or SSD to meet those conditions.
=Rob
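If you want to experiment without a restart, the yaml setting
(compaction_throughput_mb_per_sec) can also be changed at runtime; that is
what nodetool setcompactionthroughput does over JMX. A sketch of the same
thing from Java, assuming the usual StorageService MBean name and attribute
(org.apache.cassandra.db:type=StorageService / CompactionThroughputMbPerSec)
and the default JMX port 7199:

    import javax.management.Attribute;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SetCompactionThroughput {
        public static void main(String[] args) throws Exception {
            // Same endpoint nodetool talks to; host/port are assumptions for this sketch.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
                System.out.println("before: " + mbs.getAttribute(ss, "CompactionThroughputMbPerSec"));
                mbs.setAttribute(ss, new Attribute("CompactionThroughputMbPerSec", 160));
                System.out.println("after:  " + mbs.getAttribute(ss, "CompactionThroughputMbPerSec"));
            } finally {
                connector.close();
            }
        }
    }

Note this only raises the ceiling; on HDDs you may simply be IO-bound, in
which case a higher limit changes nothing.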
I did some experiments.
Let's say we have node1 and node2
First, I configured Hector with node1 & node2 as hosts and saw that only
node1 had a high CPU load.
To eliminate the "client connection" issue, I re-tested with only node2
provided as host for Hector. Same pattern. CPU load is above 50% on
Is this a setting that will have an impact only on fast CPU + SSD?
On Thursday, April 24, 2014 11:52 PM, Robert Coli wrote:
On Thu, Apr 24, 2014 at 12:50 PM, Batranut Bogdan wrote:
Can someone please explain to me what compaction throttling does? I am
referring to the yaml parameter. I hav
On Thu, Apr 24, 2014 at 12:50 PM, Batranut Bogdan wrote:
> Can someone please explain to me what compaction throttling does? I am
> referring to the yaml parameter. I have changed it from the default 16 to
> 160 but I see no improvement. I have a cluster with HDDs. I might be
> missing something
Htop is not the only tool for this. Cassandra will hit IO bottlenecks before
CPU (on faster CPUs). A simple solution is to check the size of the data dir
on the boxes. If you have approximately the same size, then Cassandra is
writing across the whole cluster. Check how the data dir size changes when
impor
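Nothing Cassandra-specific is needed for that check (nodetool status also
shows each node's load), but if you want to script it, a trivial sketch that
sums one data directory; the path is only an example:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class DataDirSize {
        public static void main(String[] args) throws IOException {
            // Example path only; point this at your data_file_directories entry.
            Path dataDir = Paths.get("/var/lib/cassandra/data");
            long bytes;
            try (Stream<Path> files = Files.walk(dataDir)) {
                bytes = files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
            }
            System.out.printf("%s = %.1f GiB%n", dataDir, bytes / (1024.0 * 1024 * 1024));
        }
    }

If the nodes report very different sizes, the writes (or the tokens) are not
being spread evenly.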
On 04/24/2014 10:29 AM, DuyHai Doan wrote:
Client used = Hector 1.1-4
Default Load Balancing connection policy
Both nodes' addresses are provided to Hector, so according to its
connection policy the client should alternate between both nodes
OK, so is only one connection being e
Hello all
Can someone please explain to me what compaction throttling does? I am
referring to the yaml parameter. I have changed it from the default 16 to 160
but I see no improvement. I have a cluster with HDDs. I might be missing
something ...
Thanks
Wow… where’s this been all my life? I don’t see why this can’t be set by
default? https://issues.apache.org/jira/browse/CASSANDRA-7087
---
Chris Lohfink
On Apr 24, 2014, at 11:48 AM, Steven A Robenalt wrote:
> There's a little-known change in the way JMX uses ports that was added to
> JDK7u4 w
Hi all,
I am looking into an issue we ran into last night with a single node in our
three-node 2.0.6 cluster. The top-level symptoms were timed-out writes and
high read and write latency.
Looking into it more, the node experienced all of these during a two-hour
window, from which it eventually reco
There's a little-known change in the way JMX uses ports that was added to
JDK7u4, which simplifies the use of JMX in a firewalled environment.
The standard RMI registry port for JMX is controlled by the
com.sun.management.jmxremote.port property. The change in 7u4 was to
introduce the related com.sun.management.jmxremote.rmi.port property, which
lets the second (RMI) port be pinned as well.
Hey Everyone,
We just had 3 seats open up for the Cassandra Developer course in Redwood City
on 4/30/14. If you are interested in attending, please register at the link
below before they sell out:
http://www.datastax.com/what-we-offer/products-services/training
Steph
*From:* Chris Lohfink [ma
The way RMI (which JMX uses, which is what nodetool uses) works is that it
first connects, then the server sends an address/port back over the wire for
the client to make a new second connection to. That second connection may go
to a different address than 127.0.0.1 (you can override this with
-Djava.rmi.server.hostname= in /etc/c
Hello Michael
RF = 1
Client used = Hector 1.1-4
Default Load Balancing connection policy
Both nodes' addresses are provided to Hector, so according to its connection
policy the client should alternate between both nodes
Regards
Duy Hai DOAN
On Thu, Apr 24, 2014 at 4:37 PM, Mic
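For reference, a rough sketch of how that Hector setup usually looks (class
names are from memory, so treat them as approximate; hosts and keyspace are
placeholders):

    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class HectorSetup {
        public static void main(String[] args) {
            // Both nodes listed explicitly; the default balancing policy should
            // alternate requests between them.
            CassandraHostConfigurator conf =
                    new CassandraHostConfigurator("node1:9160,node2:9160"); // placeholder hosts
            conf.setAutoDiscoverHosts(true); // pick up any nodes not listed here

            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
            Keyspace ks = HFactory.createKeyspace("my_ks", cluster); // placeholder keyspace
            System.out.println("Connected, keyspace = " + ks.getKeyspaceName());
        }
    }

If the CPU still lands on a single node with this in place, the client-side
balancing is probably not the culprit, and it is worth looking at the
partition key and token distribution instead.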
On 04/24/2014 09:14 AM, DuyHai Doan wrote:
My customer has a cluster with 2 nodes only. I've set virtual nodes so
future addition of new nodes will be easy.
with RF=?
Now, after some benchmarking tests with massive data inserts, I can see
with "htop" that one node has its CPU occupation up to
Hello all
I'm facing a rather weird issue with virtual nodes.
My customer has a cluster with 2 nodes only. I've set virtual nodes so
future addition of new nodes will be easy.
Now, after some benchmarking tests with massive data inserts, I can see with
"htop" that one node has its CPU occupation u
Wouldn't I want flush_largest_memtables_at larger than my
CMSInitiatingOccupancyFraction? I want GC to kick in before I have to dump
my memtables, not after.
On Wed, Apr 23, 2014 at 10:12 AM, Ruchir Jha wrote:
> Lowering CMSInitiatingOccupancyFraction to less than 0.75 will lead to
> more GC interference an
Leave the RF at 3. Especially since you use write ALL consistency. It's
actually a really bad idea to have your RF set to the same value as the
number of nodes you have. If one of your nodes goes down, your writes will
fail. In fact, I would suggest leaving your RF at 3 and setting read and write
consi
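For reference, the QUORUM/QUORUM scheme that advice points toward looks like
this with the DataStax Java driver (a sketch; host, keyspace and table names
are placeholders). With RF=3, QUORUM is 2 replicas, so one node can be down
and reads still overlap writes (2 + 2 > 3):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class QuorumExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("node1").build(); // placeholder host
            Session session = cluster.connect("my_ks");                           // placeholder keyspace

            // Write at QUORUM: succeeds as long as 2 of the 3 replicas are up.
            SimpleStatement write = new SimpleStatement(
                    "INSERT INTO t (id, val) VALUES (1, 'x')"); // placeholder table
            write.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(write);

            // Read at QUORUM: guaranteed to see the write above, since any two
            // replicas must include at least one that took the write.
            SimpleStatement read = new SimpleStatement("SELECT val FROM t WHERE id = 1");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            System.out.println(session.execute(read).one());

            cluster.close();
        }
    }

ALL writes with ONE reads give the same read guarantee, but any single node
being down fails every write, which is the trade-off being discussed above.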
Hello list,
I have a cluster of 3 nodes with RF=3. The cluster load is daily bulk
write/delete/compact, and read the rest of the time.
For better read performance, and to make sure data is 100% consistent, we
write with "ALL" and read "ONE", stopping the write process if there is a
problem.
My pr
I’ve done an install on an Amazon instance, and for some strange reason I can
telnet to the JMX port, but nodetool just hangs and doesn't do anything. I am
hoping I'm overlooking something simple that someone can help point out?
Thanks (:
cassandra@t1:/cassandra/db$ telnet 127.0.0.1 7199
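Telnet only proves the registry port accepts TCP; nodetool then makes a second
RMI connection to whatever address/port the server handed back, which is the
part that typically hangs on EC2 when it resolves to a private or firewalled
address. A quick sketch that exercises the full handshake from Java (host,
port and the attribute name are assumptions, matching the defaults above):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxProbe {
        public static void main(String[] args) throws Exception {
            // The same URL form nodetool builds for host 127.0.0.1, port 7199.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            // This is the step that hangs if the second (RMI) connection is blocked
            // or advertised on an unreachable address.
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                Object release = mbs.getAttribute(
                        new ObjectName("org.apache.cassandra.db:type=StorageService"),
                        "ReleaseVersion");
                System.out.println("Connected, Cassandra " + release);
            } finally {
                connector.close();
            }
        }
    }

If this hangs the same way, setting -Djava.rmi.server.hostname to the address
you are connecting to (as mentioned earlier in the thread) is the usual fix.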