Ah ok. No that was not the case.
The client which did the long running scan didn't wait for the slowest node.
Only other clients that asked the slow node directly were affected.
Sorry about the confusion.
On 04.12.10 05:44, Jonathan Ellis wrote:
That makes sense, but this shouldn't make reque
Ah, got it. Thanks for clearing that up!
On Sat, Dec 4, 2010 at 11:56 AM, Daniel Doubleday
wrote:
> Ah ok. No that was not the case.
>
> The client which did the long running scan didn't wait for the slowest node.
> Only other clients that asked the slow node directly were affected.
>
> Sorry about the confusion.
Hi All,
I'm currently not happy with the hardware and the operating system of our
4-node cassandra cluster. I'm planning to move the cluster to a different
hardware/OS architecture.
For this purpose I'm planning to bring up 4 new nodes, so that each node
will be a replacement of another node in
One of my Cassandra nodes is giving me a number of errors then effectively
dying. I think it was somehow caused by interrupting a nodetool clean
operation. Running a recent 0.7 build out of svn.
ERROR [MutationStage:26] 2010-12-04 16:23:04,395 RowMutationVerbHandler.java
(line 83) Error in row mutation
To be clear, I had to interrupt a clean operation earlier in the day by
killing the cassandra process. Now the node works for a while,
but continually logging the "Error in row mutation" errors then eventually
logs a "Fatal exception in thread" error. After which, the process stays
alive but there s
Are you mixing different Cassandra versions?
On Sat, Dec 4, 2010 at 4:58 PM, Dan Hendry wrote:
> To be clear, I had to interrupt a clean operation earlier in the day by
> killing the cassandra process. Now the node works for a while,
> but continually logging the "Error in row mutation" errors the
No, all nodes are running very recent (< 2 day old) code out of the 0.7
branch. This cluster has always had 0.7 RC1(+) code running on it.
On Sat, Dec 4, 2010 at 6:24 PM, Jonathan Ellis wrote:
> Are you mixing different Cassandra versions?
>
> On Sat, Dec 4, 2010 at 4:58 PM, Dan Hendry
> wrote:
I created a ticket to explore doing that - it would seem like a reasonable thing to do
with a batch/analytic/MR operation. You might chime in to explain your use
case on the ticket.
https://issues.apache.org/jira/browse/CASSANDRA-1821
On Dec 3, 2010, at 2:33 PM, Sanjay Acharya wrote:
> We are in the p
Here are two other errors which appear frequently:
ERROR [MutationStage:29] 2010-12-04 17:47:46,931 RowMutationVerbHandler.java
(line 83) Error in row mutation
java.io.IOException: Invalid localDeleteTime read: 0
at
org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java
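As an aside, errors like "Invalid localDeleteTime read: 0" come from a deserializer that validates fields as it reads, so corrupt or truncated bytes fail loudly instead of producing garbage rows. A minimal Python sketch of that pattern (all names here are invented for illustration, not Cassandra's actual code):

```python
import io
import struct

def read_local_delete_time(stream):
    """Read a 4-byte big-endian localDeleteTime and validate it.

    Illustrates the defensive-deserialization pattern: a value that
    cannot be a legitimate timestamp usually means the byte stream
    itself is corrupt, so reject it immediately.
    """
    raw = stream.read(4)
    if len(raw) < 4:
        raise IOError("Unexpected end of stream")
    (value,) = struct.unpack(">i", raw)
    if value <= 0:
        raise IOError("Invalid localDeleteTime read: %d" % value)
    return value

# A well-formed field deserializes cleanly...
good = io.BytesIO(struct.pack(">i", 1291480066))
assert read_local_delete_time(good) == 1291480066

# ...while zeroed bytes (e.g. from a damaged SSTable) are rejected.
try:
    read_local_delete_time(io.BytesIO(struct.pack(">i", 0)))
except IOError as e:
    assert "Invalid localDeleteTime" in str(e)
```

Failing fast at deserialization time is what surfaces on-disk or on-the-wire corruption as the errors quoted above.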
Hi,
I am still new to Cassandra, and from what I know so far Cassandra is based
on Google's BigTable model. One thing you can do with BigTable is
query data using GQL. I tried looking for information about a query language
built on top of Cassandra and ended with no luck. The only way
Ah, sounds like you could change your data model.
Perhaps using a Standard CF with 0.7 secondary indexes would suit you.
Or if your code knows the value for both attributes, just use these as a key and
get all the data for the row. One simple lookup. It's ok to denormalise your
data to supp
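To illustrate the denormalisation idea above, here is a toy Python sketch (plain dicts standing in for a column family; the attribute names are made up) of building the row key from both attribute values so a combined query becomes one key lookup:

```python
# Source data: records with two attributes we need to query together.
records = [
    {"id": "u1", "country": "DE", "city": "Berlin"},
    {"id": "u2", "country": "DE", "city": "Munich"},
    {"id": "u3", "country": "DE", "city": "Berlin"},
]

# Denormalised "column family": the row key combines both attribute
# values, so the combined query is a single key read instead of a
# filtered scan across rows.
by_country_city = {}
for rec in records:
    key = "%s:%s" % (rec["country"], rec["city"])
    by_country_city.setdefault(key, []).append(rec["id"])

# One simple lookup for "users in DE/Berlin":
assert by_country_city["DE:Berlin"] == ["u1", "u3"]
```

The write path does more work (every record is stored under its composite key), which is the usual trade-off when denormalising to support a read pattern.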
At least one of your nodes is sending garbage to the others.
Either there's a bug in the bleeding edge code you are running (did
you try rc1?) or you do have nodes on different versions or you have a
hardware problem.
On Sat, Dec 4, 2010 at 5:51 PM, Dan Hendry wrote:
> Here are two other errors
It doesn't make sense at the RecordReader layer to consume multiple
CFs. Chaining them together is usually best left to a higher level
like Pig, although you could do it manually if you wanted to badly
enough.
On Fri, Dec 3, 2010 at 2:33 PM, Sanjay Acharya wrote:
> We are in the process of evalu
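The "chain them at a higher level" advice can be sketched outside Pig too. A toy Python example (dicts standing in for two column families read independently, as two MR inputs would be; names invented) of joining the per-key results after each CF is consumed separately:

```python
# Two "column families" read independently by separate record readers.
profiles = {"u1": {"name": "alice"}, "u2": {"name": "bob"}}
activity = {"u1": {"logins": 7}, "u3": {"logins": 2}}

def chain(*cfs):
    """Join several per-key result sets by row key.

    This is the higher-level chaining step: each input is read on its
    own, and the merge happens afterwards, rather than teaching one
    RecordReader to consume multiple column families.
    """
    joined = {}
    for cf in cfs:
        for key, cols in cf.items():
            joined.setdefault(key, {}).update(cols)
    return joined

combined = chain(profiles, activity)
assert combined["u1"] == {"name": "alice", "logins": 7}
assert combined["u3"] == {"logins": 2}
```

Keys present in only one CF still appear in the result, which matches an outer-join-style merge; Pig's JOIN would give you finer control over inner versus outer semantics.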