You'd be better off using version 1.0.9 (we run that one in production) or 1.0.10.
Unfortunately, 1.1 is still a bit too young to be ready for production.
--Original Message--
From: Rob Coli
To: user@cassandra.apache.org
To: osish...@gmail.com
Reply-To: user@cassandra.apache.org
Subject: Re: commitlog_sync_ba
Did you open inbound ports 1024-65535 in your Security Group?
JMX uses two connection channels: one, 7199 by default, accepts the initial
connection request; the other is a random port between 1024 and 65535 chosen
at runtime. Nodetool runs over JMX.
Patrick.
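A quick way to sanity-check that at least the fixed JMX port is reachable from another machine is a plain TCP connect. This is only a rough sketch (the host is the one mentioned later in this thread, the timeout is arbitrary), and it cannot exercise the ephemeral RMI port that JMX negotiates afterwards, which is exactly the part a Security Group tends to block:

    import socket

    def port_open(host, port, timeout=3.0):
        """Return True if a plain TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # 7199 is only the registry port; the follow-up RMI connection uses an
    # ephemeral port in 1024-65535, which this check cannot verify.
    print(port_open('10.244.207.16', 7199))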
-Original Message-
Also run nodetool disablegossip to stop other nodes from sending requests to the one
you are about to shut down (a rough shutdown sequence is sketched below).
> I can shut down my cluster, but I don't want to have the nodes ignore
> it due to some schema misconfiguration etc. when I bring it up again.
if you do a rolling restart the *cluster* will not
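For reference, a graceful single-node shutdown before a rolling restart often looks something like the sketch below. It just shells out to nodetool; the exact command sequence is an assumption based on common practice for this era of Cassandra, not something this thread prescribes:

    import subprocess

    # Sketch of a graceful shutdown of one node before restarting it.
    # Assumes nodetool is on PATH and JMX is reachable on the default port.
    NODETOOL = ['nodetool', '--host', 'localhost', '--port', '7199']

    def run(*args):
        subprocess.check_call(NODETOOL + list(args))

    run('disablegossip')   # stop other nodes from sending requests to this one
    run('disablethrift')   # stop accepting new client connections
    run('drain')           # flush memtables and stop accepting writes
    # ...now stop the Cassandra process, restart it, and move to the next node.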
On Mon, May 28, 2012 at 6:53 AM, osishkin osishkin wrote:
> I've been experimenting with Cassandra 0.7 for some time now.
I feel obligated to recommend that you upgrade to Cassandra 1.1.
Cassandra 0.7 is better than 0.6, but I definitely still wouldn't be
"experimenting" with this old version in 2012.
Use nodetool removetoken to remove a dead node's token, rather than nodetool move.
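For a node that is already dead, nodetool removetoken is the usual way to take its range out of the ring on this version line. A rough sketch of scripting it (token 0 and the default JMX port are taken from the example below):

    import subprocess

    # Remove the dead node's token (0 in this example) from any live node.
    # If the removal appears stuck, "removetoken status" / "removetoken force"
    # can be used to inspect or force-complete it.
    NODETOOL = ['nodetool', '--host', 'localhost', '--port', '7199']
    subprocess.check_call(NODETOOL + ['removetoken', '0'])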
-- Original --
From: "Poziombka, Wade L";
Date: May 30, 2012, 5:29
To: "user@cassandra.apache.org";
Subject: nodetool move 0 gets stuck in "moving" state forever
If the node with token 0 dies and we just want it gone from the cluster, we
would do a nodetool move 0. Then, monitoring with nodetool ring, it seems to be
stuck in the Moving state forever.
Any ideas?
OK, now I am confused :).
If I have the following:
placement_strategy = 'NetworkTopologyStrategy' and strategy_options =
{DC1:R1, DC2:R1, DC3:R1}
does this mean that in each of my datacenters I will have one full replica,
which can also be a seed node?
And if I have 3 nodes in addition to the DC replicas with no
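For what it's worth, reading R1 as a replication factor of 1 per datacenter, the keyspace above could be created through pycassa's SystemManager roughly as below (the keyspace name and host are made up). Whether a replica node is also a seed is an orthogonal choice made in cassandra.yaml, not in the strategy options:

    from pycassa.system_manager import SystemManager, NETWORK_TOPOLOGY_STRATEGY

    # Hypothetical keyspace with one replica in each of the three datacenters.
    sys_mgr = SystemManager('localhost:9160')
    sys_mgr.create_keyspace(
        'my_keyspace',
        replication_strategy=NETWORK_TOPOLOGY_STRATEGY,
        strategy_options={'DC1': '1', 'DC2': '1', 'DC3': '1'})
    sys_mgr.close()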
8 hours, 1 cup of coffee, and 4 Advil later, I think I've gotten to the
bottom of this. Not having much of a Java or JMX background, I'll try
to explain it the best that I can.
To recap, my machine originally had the IP address 10.244.207.16.
Then I shut down/restarted that EC2 instance, and it had
I'm afraid that did not work. I'm running JMX on port 7199 (the
default) and I verified that the port is open and accepting
connections.
Here's what I'm seeing:
dmuth@devteam:~/cliq (production) $ nodetool --host localhost --port 7199 ring
Error connection to remote JMX agent!
java.rmi.ConnectEx
It should retry, but it doesn't. It is also clear that it delegates the
retry to the client ("Retry burden pushed out to client"); you can also
check the Hector code. I wrote a separate service that retries when this
exception occurs.
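Such an external retry layer does not have to be complicated; a minimal sketch (the do_request callable, attempt count, and delay are made-up placeholders standing in for the real client call) might look like:

    import time

    def with_retries(do_request, attempts=5, delay_seconds=10):
        """Call do_request(), retrying on failure with a fixed delay."""
        last_error = None
        for attempt in range(1, attempts + 1):
            try:
                return do_request()
            except Exception as exc:  # e.g. an "all host pools marked down" error
                last_error = exc
                print("attempt %d failed: %s; retrying in %ds"
                      % (attempt, exc, delay_seconds))
                time.sleep(delay_seconds)
        raise last_error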
I think you have a problem with your load balancer. Try to connect
My webapp connects to the LoadBalancer IP which has the actual nodes in its
pool.
If there is by any chance a connection break, won't Hector retry to
re-establish the connection? I would guess it should retry every XX seconds
based on retryDownedHostsDelayInSeconds.
Regards,
Shubham
Since all hosts seem to be down, Hector will not retry. There should
be at least one node up in the cluster. Make sure that you have a proper
connection from your webapps to your cluster.
Cem.
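One thing that has helped in similar setups is giving the client the node addresses directly instead of a single load-balancer IP, so it can mark individual hosts down and recover on its own. The same idea expressed with pycassa (the client library mentioned elsewhere in this digest; keyspace and hosts are made up) looks like:

    import pycassa

    # Point the client at the nodes themselves rather than one LB address.
    pool = pycassa.ConnectionPool(
        'my_keyspace',
        server_list=['10.0.0.1:9160', '10.0.0.2:9160', '10.0.0.3:9160'],
        timeout=5)
    cf = pycassa.ColumnFamily(pool, 'my_cf')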
On Tue, May 29, 2012 at 1:46 PM, Shubham Srivastava <
shubham.srivast...@makemytrip.com> wrote:
>
Any takers on this? It's hitting us badly right now.
Regards,
Shubham
From: Shubham Srivastava
Sent: Tuesday, May 29, 2012 12:55 PM
To: user@cassandra.apache.org
Subject: All host pools Marked Down
I am getting this exception a lot of times:
me.prettyprint.hector.api.exc
How do you do a range query over composite column names in Cassandra?
"key1" => (A:A:C), (A:B:C), (A:C:C), (A:D:C), (B:A:C)
e.g. get_range("key1", start_column=(A, ''), end_column=(A, C)) would return
[ (A:B:C), (A:C:C) ] (in pycassa).
I mean, does the composite implementation add much overhead
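To make the slice above concrete, here is a rough pycassa sketch. The keyspace, column family, and comparator are assumptions (a CompositeType of three UTF8 components), and in pycassa the slice is expressed with column_start/column_finish tuples on ColumnFamily.get rather than a get_range with start_column/end_column:

    import pycassa

    # Assumes a column family whose comparator is
    # CompositeType(UTF8Type, UTF8Type, UTF8Type), e.g. columns like ('A','B','C').
    pool = pycassa.ConnectionPool('my_keyspace', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'composite_cf')

    # Slice the columns of row 'key1' whose first composite component is 'A';
    # pycassa packs the tuple bounds into composite column names.
    cols = cf.get('key1', column_start=('A',), column_finish=('A',))
    for name, value in cols.items():
        print(name, value)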
Hi,
We were trying to do a similar kind of migration (to a new cluster, no
downtime) in order to remove a legacy OrderedPartitioner limitation. In
the end we were allowed enough downtime to migrate, but originally we were
proposing a similar solution based around deploying an update to the
applic