RpcTimeoutInMillis should be sufficient.
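
For reference, in the 0.6 line that setting lives in storage-conf.xml on each node (a sketch; the 60000 value mirrors the 60-second setting mentioned below, and the node must be restarted for it to take effect):

```xml
<!-- conf/storage-conf.xml: how long the coordinator waits for
     replicas to acknowledge an operation before throwing
     TimedOutException back to the client -->
<RpcTimeoutInMillis>60000</RpcTimeoutInMillis>
```

Note this is a server-side timeout; if the Thrift client also sets its own socket timeout, that needs to be at least as large or the client will give up first.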

You can turn on debug logging to see how long the destination node is actually taking to do the write (or look at cfstats, if no other writes are going on).
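
Assuming the stock log4j setup shipped with 0.6, a sketch of both checks (host and JMX port are illustrative):

```
# conf/log4j.properties on the destination node: raise the root
# logger to DEBUG to see per-operation timings in system.log.
# DEBUG is verbose, so revert it once you have the numbers.
log4j.rootLogger=DEBUG,stdout,R

# Then check per-column-family write latency over JMX:
#   nodetool --host 127.0.0.1 --port 8080 cfstats
# Look at the "Write Latency" line for the column family you are
# mutating and compare it against RpcTimeoutInMillis.
```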

On Fri, May 14, 2010 at 11:55 AM, Sonny Heer <sonnyh...@gmail.com> wrote:
> Hey,
>
> I'm running a map/reduce job, reading from HDFS directory, and
> reducing to Cassandra using the batch_mutate method.
>
> The reducer builds the list of RowMutations for a single row and
> calls batch_mutate at the end.  As I move to a larger dataset, I'm
> seeing the following exception:
>
> Caused by: TimedOutException()
>        at 
> org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:15361)
>        at 
> org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:796)
>        at 
> org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:772)
>
> I changed RpcTimeoutInMillis to 60 seconds, but it made no
> difference.  What configuration changes should I make when doing
> intensive write operations using batch_mutate?
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
