Hi All.
I have a 2*2 NetworkTopology replication setup, and I run my application
via the DataStax driver.
I frequently get errors of the following type:
*Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write)*
I have already tried passing a "w
A few weeks ago, we moved a keyspace (ks) to another DC in the same cluster.
Original: cluster_1: DC1,ks1+ks2
After: cluster_1: DC1,ks1; DC2,ks2
Following http://www.planetcassandra.org/blog/cassandra-migration-to-ec2,
our steps were:
1. On all new nodes (DC2):
$ vi /usr/install/cassandra/conf/cassandra.yaml:
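The message cuts off before showing which keys were edited. As a hedged sketch only, these are the cassandra.yaml settings that typically need attention on nodes joining as a new DC (the addresses, snitch choice, and cluster name below are assumptions, not taken from the thread):

```yaml
# Sketch only -- the original message does not show the actual edits.
cluster_name: 'cluster_1'              # must match the existing cluster
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.1.1,10.0.2.1" # example addresses: seeds from both DCs
endpoint_snitch: GossipingPropertyFileSnitch  # a DC/rack-aware snitch
auto_bootstrap: false                  # new-DC nodes are typically rebuilt,
                                       # not bootstrapped
```

With a DC-aware snitch, each node's DC and rack then go in cassandra-rackdc.properties.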
Thanks, all of you.
--
Ranger Tsao
2015-10-30 18:25 GMT+08:00 Anishek Agarwal :
> If it's some sort of time series, DTCS might turn out to be better for
> compaction. Also, some disk monitoring might help to understand whether disk
> is the bottleneck.
>
> On Sun, Oct 25,
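For reference, switching a table to DTCS is a one-line schema change. The keyspace and table names below are hypothetical; the strategy class name is the real one the suggestion refers to:

```cql
-- Hypothetical keyspace/table; DateTieredCompactionStrategy is the class
-- meant by "DTCS" above.
ALTER TABLE my_ks.events
  WITH compaction = {'class': 'DateTieredCompactionStrategy'};
```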
Thanks to both Nate and Jeff for highlighting the bug and the configuration
issues.
We've upgraded to 2.1.11.
Lowered our memtable_cleanup_threshold to 0.11.
Lowered our thrift_framed_transport_size_in_mb to 15.
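For reference, both of those settings live in cassandra.yaml; a sketch of the resulting fragment (values from the message above, comments are mine):

```yaml
memtable_cleanup_threshold: 0.11         # flush memtables more eagerly
thrift_framed_transport_size_in_mb: 15   # cap the Thrift frame size
```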
We kicked off another run.
The result was that Cassandra failed after 1 hour.
SS
Hi folks,
We are hitting an issue similar to the one described in
https://issues.apache.org/jira/browse/CASSANDRA-8072. When we try to bootstrap
a node, it fails due to the issues described in the JIRA ticket above.
We are using Cassandra version 2.0.14.
Is there a work-around for the situation? Please n
Serial consistency gets invoked at the protocol level when doing
lightweight transactions such as CAS operations. If you're expecting your
topology to be RF=2 with N=2 nodes, it seems some keyspace actually has RF=3,
so there aren't enough replicas available to satisfy serial consistency.
See
http://docs.da
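The arithmetic behind that explanation can be sketched as follows. This is not code from the thread; `quorum` is an illustrative helper implementing the standard formula (a QUORUM, and likewise a Paxos/SERIAL round, needs floor(RF/2) + 1 replicas):

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must respond for a QUORUM (and Paxos/SERIAL)
    operation: floor(RF / 2) + 1."""
    return replication_factor // 2 + 1

# With RF=2, quorum is 2: both replicas must be up, so losing one node
# blocks serial operations.  With RF=3 you can tolerate one replica down,
# but only if three nodes actually hold replicas.
print(quorum(2))  # 2
print(quorum(3))  # 2
print(quorum(5))  # 3
```

If a keyspace's RF exceeds the number of nodes that can hold its replicas, serial operations time out waiting for acknowledgements that can never arrive.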
Having caught a node in an undesirable state, I see that many of my threads
look like this:
"SharedPool-Worker-5" #875 daemon prio=5 os_prio=0 tid=0x7f3e14196800
nid=0x96ce waiting on condition [0x7f3ddb835000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Nativ
@Jeff Jirsa thanks, the memtable_* keys were the actual determining factor
for my memtable flushes; they are what I needed to play with.
On Thu, Oct 29, 2015 at 8:23 AM, Ken Hancock
wrote:
> Or if you're doing a high volume of writes, then your flushed file size
> may be completely determined by
Hello!
I have set up a Cassandra cluster with two nodes, Node A and Node B: RF=2,
read CL=1, and write CL=1.
Node A is seed...
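A keyspace matching that description might be created like this (the keyspace name and SimpleStrategy are assumptions; the message does not show the schema):

```cql
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

-- Per-request consistency is set client-side; in cqlsh:
CONSISTENCY ONE;
```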
At first everything works well: when I add/delete/update entries on Node
A, everything is replicated to Node B and vice versa, even if I shut down Node
A, and I
>
>
> Forgive me, but what is CMS?
>
Sorry - ConcurrentMarkSweep garbage collector.
>
> No. I’ve tried some mitigations since tuning thread pool sizes and GC, but
> the problem begins with only an upgrade of Cassandra. No other system
> packages, kernels, etc.
>
>
>
From what 2.0 version did yo
I think this is normal behaviour, since you shut down your seed and then
rebooted it. You should know that when you start a seed node, it doesn't do
the bootstrapping step, which means it doesn't check whether there are changes
in the contents of the tables. Here in your tests, you shut down Node A
b
Thanks for your answer!
I thought that bootstrapping is executed only the first time you add a node to
the cluster, and that after that, gossip is the method used to discover the
cluster members again. In my case, I thought it was more
about a read-repair issue... am I wrong?
D
Hello,
After restarting Cassandra, all of my keyspaces went missing. I can only
see system_traces, system, and dse_system. I didn't make any changes to
cassandra.yaml.
But I can still see all the keyspace data in my */data/* directory. Is there
any way to access those lost keyspaces through cqlsh?
Can
Following up on this older question: as per the docs, one *should* still do a
full repair periodically (the docs say weekly), right? And run incremental
repairs more often to fill in?
On Mon, Nov 2, 2015 at 3:02 PM, Maciek Sakrejda wrote:
> Following up on this older question: as per the docs, one *should* still
> do a full repair periodically (the docs say weekly), right? And run
> incremental repairs more often to fill in?
>
Something that amounts to full repair once every gc_grace_s
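A hedged sketch of what such a schedule can look like with nodetool (the keyspace name is hypothetical; flags differ between versions: in the 2.1 line a plain `nodetool repair` is a full repair, and `-inc` opts into incremental):

```shell
# Frequent incremental repairs (e.g. daily):
nodetool repair -inc my_keyspace

# Plus a full repair at least once per gc_grace_seconds (default 10 days):
nodetool repair my_keyspace
```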
On Mon, Nov 2, 2015 at 1:37 PM, Arun Sandu wrote:
> After restarting Cassandra, all of my keyspaces went missing. I can
> only see system_traces, system, and dse_system. I didn't make any changes
> to cassandra.yaml.
>
> But I can still see all the keyspace data in my */data/* directory. Is there
> an
> On Nov 2, 2015, at 11:35 AM, Nate McCall wrote:
> Forgive me, but what is CMS?
>
> Sorry - ConcurrentMarkSweep garbage collector.
Ah, my brain was trying to think of something Cassandra-specific. I
have full GC logging on, and since moving to G1 I haven't had any >500 ms GC
cycles.
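For anyone reproducing that setup, a sketch of the relevant JVM flags (these are standard HotSpot options; in the 2.1 line they go in cassandra-env.sh, and the log path and pause target are assumptions):

```shell
# Switch from CMS to G1 with a pause-time target, plus full GC logging.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=500"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```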
Hi Eric,
I am sorry, but I don't understand.
If there were some issue in the configuration, then the
consistency issue would be seen every time (I guess).
As of now, the error is seen only sometimes (probably 30% of the time).
On Mon, Nov 2, 2015 at 10:24 PM, Eric Stevens wrote:
> Serial consistenc