You could try turning up thrift_max_message_length_in_mb and thrift_framed_transport_size_in_mb (16 MB and 15 MB by default, respectively) in cassandra.yaml and see if that helps, for example:
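The relevant section of cassandra.yaml would look roughly like this (the values below are only illustrative, not recommendations; just keep thrift_max_message_length_in_mb larger than thrift_framed_transport_size_in_mb and both larger than your biggest batch):

    # cassandra.yaml -- example values, sized up for larger batch mutations
    thrift_max_message_length_in_mb: 64
    thrift_framed_transport_size_in_mb: 60

A node restart is needed for the change to take effect.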
On Thu, Feb 17, 2011 at 2:46 PM, <roshandawr...@gmail.com> wrote:
> Thanks. I will set the debug mode and see/share if it shows any relevant info.
>
> The smaller batches of 20 or so column mutations had been working fine.
>
> After merging, the total # of mutations across all CFs must not be crossing
> 60-70.
>
> The problem is that it is not slow - it just seems hung there.
>
> ---------------------------------------------------
> Sent from BlackBerry
>
> -----Original Message-----
> From: Nate McCall <n...@datastax.com>
> Date: Thu, 17 Feb 2011 14:37:53
> To: Roshan Dawrani <roshandawr...@gmail.com>
> Cc: <hector-us...@googlegroups.com>; <user@cassandra.apache.org>
> Subject: Re: Updating/inserting into multiple column families using one
> mutator batch
>
> log4j-server.properties in the conf directory of cassandra (requires a
> restart), or via JMX through JConsole or similar on
> o.a.c.service.StorageService#setLog4jLevel
>
> Is there a threshold under which you can successfully insert in batch
> mode? Even with something low like 10 entries?
>
> On Thu, Feb 17, 2011 at 2:29 PM, Roshan Dawrani <roshandawr...@gmail.com>
> wrote:
>> Hi,
>> Thanks for replying.
>> I have kept an eye on the Cassandra logs as well as my app server logs, and
>> I didn't notice any unusual Hector/Cassandra messages there.
>> Where can I configure Cassandra to log in debug mode?
>> I am pretty sure I haven't touched 500 mutations in a batch yet. What
>> could be other possibilities for it hanging? It's happening absolutely
>> consistently.
>> rgds,
>> Roshan
>>
>> On Fri, Feb 18, 2011 at 1:50 AM, Nate McCall <n...@datastax.com> wrote:
>>>
>>> It is fine to use multiple column families via batch_mutate. The size
>>> of the batch itself will take some tuning. For what you are describing
>>> below, it will help to watch the Cassandra logs in debug mode to diagnose
>>> the issue.
>>>
>>> In general, though, I think a good rule with batch_mutate is to start
>>> with 500 mutations (regardless of column families) and go up
>>> incrementally from there, watching the logs and ideally memory
>>> consumption as you go.
>>>
>>> On Thu, Feb 17, 2011 at 2:11 PM, Roshan Dawrani <roshandawr...@gmail.com>
>>> wrote:
>>> > Hi,
>>> > Is it OK to update / insert into multiple column families (some regular,
>>> > some related super column families) in one batch?
>>> >
>>> > I earlier had a few separate mutator.execute() calls hitting these CFs,
>>> > but I am trying to merge them into one bigger batch.
>>> > The issue I am facing is that the smaller batches used to get executed
>>> > perfectly, but with the combined one, the updates just hang!
>>> > I am not pin-pointing the issue anywhere at this time. I just want to
>>> > know if it is normal to update multiple CFs in a batch and if there is
>>> > a deadlock situation that may arise if that is done.
>>> > --
>>> > Roshan
>>> > Blog: http://roshandawrani.wordpress.com/
>>> > Twitter: @roshandawrani
>>> > Skype: roshandawrani
>>> >
>>> >
>>
>>
>
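P.S. For what it's worth, here is a minimal Hector sketch of what the merged multi-CF batch looks like (column family names, keys and values below are made up purely for illustration):

    import java.util.Arrays;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class BatchExample {
        // keyspace is assumed to be obtained elsewhere,
        // e.g. HFactory.createKeyspace("MyKeyspace", cluster)
        static void mergedBatch(Keyspace keyspace) {
            StringSerializer ser = StringSerializer.get();
            Mutator<String> mutator = HFactory.createMutator(keyspace, ser);

            // queue mutations against two standard column families on the same mutator
            mutator.addInsertion("user-1", "Users",
                    HFactory.createStringColumn("name", "Roshan"));
            mutator.addInsertion("user-1", "Events",
                    HFactory.createStringColumn("last_login", "2011-02-17"));

            // and against a super column family
            mutator.addInsertion("user-1", "UserActivity",
                    HFactory.createSuperColumn("session-1",
                            Arrays.asList(HFactory.createStringColumn("action", "login")),
                            ser, ser, ser));

            // everything queued above goes to the cluster in a single batch_mutate call
            mutator.execute();
        }
    }

Since the whole mutator is flushed in one batch_mutate round trip, the total payload size matters more than how many column families are touched, which is why the Thrift message/frame limits above are worth checking first.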