Yes, there are many writes happening at the same time to any given Cassandra node.

E.g., assume 10 machines, all running Hadoop and Cassandra.  The Hadoop
nodes each pick a Cassandra node at random and write to it directly
using batch_mutate.
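
For concreteness, each reducer does roughly the following (a minimal
sketch against the 0.6 Thrift API; the node list, keyspace, and column
family names here are made up):

import java.util.*;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class ReducerWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical node list; each reducer picks one at random.
        String[] nodes = {"cass1", "cass2", "cass3"};
        String host = nodes[new Random().nextInt(nodes.length)];

        TSocket socket = new TSocket(host, 9160);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(socket));
        socket.open();

        // One column for one row; the real reducer builds the full
        // mutation list for the row before the single batch_mutate call.
        Column col = new Column("name".getBytes(), "value".getBytes(),
                                System.currentTimeMillis());
        ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
        cosc.setColumn(col);
        Mutation m = new Mutation();
        m.setColumn_or_supercolumn(cosc);

        // row key -> column family -> mutations
        Map<String, Map<String, List<Mutation>>> mutationMap =
            Collections.singletonMap("rowKey",
                Collections.singletonMap("MyCF",
                    Collections.singletonList(m)));

        client.batch_mutate("MyKeyspace", mutationMap, ConsistencyLevel.ONE);
        socket.close();
    }
}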

After increasing the timeout even more, I don't get that exception
anymore, but now I'm getting UnavailableException.

The wiki states this happens when not all of the required replicas
could be created and/or read.  How do we resolve this problem?  The
write consistency level is ONE.
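
One thing we could try on the client side is falling back to another
node when one coordinator reports this (a sketch; writeRow() is a
hypothetical helper wrapping the batch_mutate call above):

import org.apache.cassandra.thrift.TimedOutException;
import org.apache.cassandra.thrift.UnavailableException;

// Sketch: with consistency ONE, UnavailableException means the node we
// asked doesn't believe any replica for the key is alive, so trying the
// other nodes can help if it's a transient failure-detector hiccup.
static void writeWithFallback(String[] nodes, String rowKey) throws Exception {
    for (String host : nodes) {
        try {
            writeRow(host, rowKey);  // hypothetical helper doing the batch_mutate
            return;
        } catch (UnavailableException e) {
            // no live replica according to this coordinator; try the next node
        } catch (TimedOutException e) {
            // replica didn't ack within RpcTimeoutInMillis; try the next node
        }
    }
    throw new Exception("write failed against every node");
}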

Thanks


On Sat, May 15, 2010 at 8:02 AM, Jonathan Ellis <jbel...@gmail.com> wrote:
> rpctimeout should be sufficient
>
> you can turn on debug logging to see how long it's actually taking the
> destination node to do the write (or look at cfstats, if no other
> writes are going on)
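>
> e.g. (host name is made up):
>
>   # conf/log4j.properties: log Cassandra internals at DEBUG
>   log4j.logger.org.apache.cassandra=DEBUG
>
>   # per-column-family read/write latency stats
>   bin/nodetool -host cass1 cfstats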
>
> On Fri, May 14, 2010 at 11:55 AM, Sonny Heer <sonnyh...@gmail.com> wrote:
>> Hey,
>>
>> I'm running a map/reduce job, reading from an HDFS directory, and
>> reducing to Cassandra using the batch_mutate method.
>>
>> The reducer builds the list of row mutations for a single row and
>> calls batch_mutate at the end.  As I move to a larger dataset, I'm
>> seeing the following exception:
>>
>> Caused by: TimedOutException()
>>        at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:15361)
>>        at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:796)
>>        at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:772)
>>
>> I changed RpcTimeoutInMillis to 60 seconds (60000) with no
>> improvement.  What configuration changes should I make when doing
>> intensive write operations with batch_mutate?
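>>
>> For reference, the change was along these lines in storage-conf.xml
>> (value in milliseconds):
>>
>>   <RpcTimeoutInMillis>60000</RpcTimeoutInMillis>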
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com
>
