Hi,
I have a test cluster of 4 nodes running Debian and Cassandra 0.8.7;
there are 3 keyspaces, all with RF=3, and each node has a load of around 40 GB.
When I run "nodetool repair", after a while all Thrift clients that
read with CL.QUORUM get a TimeoutException, and even some that use just
CL.ONE do. I've tried to
Even if your query contains multiple columns, each with a secondary index on
it, current Cassandra uses only one of them for the index lookup. The other
columns are used to filter the matched results. If one part of your
secondary index query has a lot of matches in the data, Cassandra has to
iterate over ma
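For illustration, a minimal Hector sketch of such a two-clause query (the cluster, keyspace, column family, and column names below are made up): both columns carry a secondary index, but only one equality clause drives the index scan, and the other is applied as a filter over the candidate rows that scan returns.

    import me.prettyprint.cassandra.model.IndexedSlicesQuery;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.QueryResult;

    public class TwoClauseIndexQuery {
        public static void main(String[] args) {
            // Placeholder cluster/keyspace/CF names, for illustration only.
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
            Keyspace ksp = HFactory.createKeyspace("Keyspace1", cluster);
            StringSerializer ss = StringSerializer.get();

            IndexedSlicesQuery<String, String, String> query =
                    HFactory.createIndexedSlicesQuery(ksp, ss, ss, ss);
            query.setColumnFamily("users");
            query.setColumnNames("state", "birth_year", "name");
            query.setRowCount(100);
            // Both "state" and "birth_year" are indexed, but only one of the
            // two clauses is served by an index scan; the other is evaluated
            // as a filter over the rows that scan produces.
            query.addEqualsExpression("state", "CA");
            query.addEqualsExpression("birth_year", "1980");

            QueryResult<OrderedRows<String, String, String>> result = query.execute();
            System.out.println(result.get().getCount() + " matching rows");
        }
    }

So if the clause Cassandra picks as the driver matches a large fraction of the rows, the read can take long enough to hit the rpc timeout even though the final result set is small.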
The examples are old, except that (at least) there is an up-to-date version
for Java (look for the 1.0.0 version).
On 11/23/2011 08:24 PM, Aaron Raddon wrote:
Hello, I am trying to read/write to Cassandra using Go and am receiving
this error, "columnfamily alone is required for standard CF", back from
cassan
Hello, I am trying to read/write to Cassandra using Go and am receiving
this error, "columnfamily alone is required for standard CF", back from the
Cassandra server (1.0.3 on Ubuntu). I generated the Go files using
this Thrift
http://svn.apache.org/repos/asf/cassandra/tags/cassandra-1.0.3/interface/cassan
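That validation message is generally the server complaining that the ColumnParent (or ColumnPath) in the request carried a super_column for a standard column family, so a likely culprit is the generated Go code serializing an empty super_column instead of leaving the optional field unset. For comparison, a minimal raw-Thrift sketch in Java (keyspace, column family, key, and values are made up) that sets only column_family:

    import java.nio.ByteBuffer;

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;

    public class StandardCfInsert {
        public static void main(String[] args) throws Exception {
            TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("Keyspace1"); // placeholder keyspace

            // For a standard CF the parent carries ONLY the column family
            // name; super_column must stay unset, not an empty byte array.
            ColumnParent parent = new ColumnParent("Standard1");

            Column col = new Column();
            col.setName(ByteBuffer.wrap("name".getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap("aaron".getBytes("UTF-8")));
            col.setTimestamp(System.currentTimeMillis() * 1000);

            client.insert(ByteBuffer.wrap("row1".getBytes("UTF-8")), parent, col,
                          ConsistencyLevel.ONE);
            transport.close();
        }
    }

If the Go structs expose super_column as a plain byte slice, check that it is nil (not zero-length) before the call is serialized.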
This was discussed a long time ago, but I need to know what the
state-of-the-art answer to it is:
assume one of my few nodes is very dead. I have no resources or time to
fix it. The data is replicated,
so it is still available in the cluster. How do I completely
remove the dead node without ha
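For reference, the usual answer in the 0.8/1.0 line was to run nodetool removetoken with the dead node's token from any live node. Below is a sketch of the equivalent JMX call; the host name and token are made up, and the removeToken operation name is assumed from that era's StorageService MBean.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RemoveDeadNode {
        public static void main(String[] args) throws Exception {
            // Connect to JMX on any LIVE node (7199 is the default JMX port).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://live-node.example.com:7199/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");

            // Token of the dead node, as shown by "nodetool ring" (made up).
            String deadToken = "85070591730234615865843651857942052864";

            // The same operation that "nodetool removetoken <token>" invokes;
            // the remaining replicas stream the removed ranges to their new owners.
            mbs.invoke(storageService, "removeToken",
                       new Object[] { deadToken }, new String[] { "java.lang.String" });

            connector.close();
        }
    }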
jconsole is going to be the most up-to-date documentation for the JMX
interface =(.
-Jeremiah
On 11/23/2011 10:49 AM, David McNelis wrote:
Ok, in that case I think the docs are wrong.
http://wiki.apache.org/cassandra/JmxInterface has StorageService as
part of org.apache.cassandra.service.
Yes, I'm using index slice queries, but both of the request parameters
have a secondary index on them. So it's not quite the same as your
problem.
On 11/23/2011 13:55, Roland Gude wrote:
Are you using index slice queries?
Wiki documentation will probably never be able to keep pace with the actual
JMX objects. They tend to get renamed or moved, properties get added and
removed, etc. That happens with active projects.
Edward
On Wed, Nov 23, 2011 at 12:22 PM, David McNelis
wrote:
> In that case, I think that the documentation
In that case, I think that the documentation is incorrect, as it has
org.apache.cassandra.service listed as the package for StorageService.
I apologize for the lack of the rest of the thread, everything is getting
bounced when I try to send it for some reason.
--
*David McNelis*
Lead Software Engineer
Age
That should do the trick.
2011/11/23 Michael Vaknine :
> Hi Jonathan,
>
> You are right, I had one node on 1.0.2 for some reason, so I did the upgrade again.
> I now have a 4-node cluster upgraded to 1.0.3, but now I get the following error
> on 2 nodes in the cluster:
>
> ERROR [HintedHandoff:3] 2011-11-23 06
Ok, in that case I think the docs are wrong.
http://wiki.apache.org/cassandra/JmxInterface has StorageService as part of
org.apache.cassandra.service.
Also, once I executed a CLI command, I started getting the expected output
(output being that it was able to return the live nodes).
--
*David
Oh, I was thinking of StorageProxy. StorageService should exist; you
just have the path wrong. It should be
"org.apache.cassandra.db:type=StorageService" (you had
org.apache.cassandra.service). JMX should be lightweight enough for
this.
On Wed, Nov 23, 2011 at 9:06 AM, David McNelis
wrote:
> But if
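For illustration, a minimal sketch of reading that MBean from plain Java, assuming remote JMX is reachable on the default port 7199 (the host name is made up, and the LiveNodes attribute is assumed from that era's StorageServiceMBean, the same data nodetool ring reports):

    import java.util.List;

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class LiveNodesCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder host; 7199 is Cassandra's default JMX port.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://node1.example.com:7199/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Note the JMX domain is "db", not "service", even though the
            // class lives in the org.apache.cassandra.service package.
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");

            @SuppressWarnings("unchecked")
            List<String> liveNodes =
                    (List<String>) mbs.getAttribute(storageService, "LiveNodes");
            System.out.println("Live nodes: " + liveNodes);

            connector.close();
        }
    }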
But if the StorageService bean is only created once a transaction has
occurred, is there another location, e.g. the CommitLog, that I could check
just to see if the node is 'live'? Or do you think I'd be better served
trying to execute something on the node (e.g. read a record using Hector)?
Idea
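If you go the "execute something on the node" route, here is a minimal Hector sketch of such a probe, pinned to a single host so the connection pool cannot quietly fail over to another node; the host, cluster, keyspace, column family, key, and column names are all made up.

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.HColumn;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.ColumnQuery;
    import me.prettyprint.hector.api.query.QueryResult;

    public class NodeProbeByRead {
        public static void main(String[] args) {
            // Point the pool at just the node we want to test (placeholder host).
            CassandraHostConfigurator hosts =
                    new CassandraHostConfigurator("node1.example.com:9160");
            Cluster cluster = HFactory.getOrCreateCluster("ProbeCluster", hosts);
            Keyspace ksp = HFactory.createKeyspace("Keyspace1", cluster);

            StringSerializer ss = StringSerializer.get();
            ColumnQuery<String, String, String> query =
                    HFactory.createColumnQuery(ksp, ss, ss, ss);
            query.setColumnFamily("Standard1");
            query.setKey("probe-row");
            query.setName("probe-column");

            try {
                QueryResult<HColumn<String, String>> result = query.execute();
                // A null column only means the row/column is missing; the
                // node still answered, so it is up.
                System.out.println("node answered, column = " + result.get());
            } catch (Exception e) {
                System.out.println("node looks down: " + e.getMessage());
            }
        }
    }

The single-host pool matters: with a multi-host list, a read could quietly succeed against a different node and mask the outage you are trying to detect.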
Are you using index slice queries?
I described a similar problem a couple of months ago (along with mechanisms to
reproduce the behavior), but unfortunately failed to create an issue for it
(shame on me).
The mail thread is in the archives
http://www.mail-archive.com/user@cassandra.apache.org/msg16157.htm