Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
Gotcha. No, I don't see anything particularly interesting in the log. Do I need to turn on higher logging in log4j? Here it is after I killed the client:

INFO [main] 2010-04-21 14:25:52,166 DatabaseDescriptor.java (line 229) Auto DiskAccessMode determined to be standard
INFO [main] 2010-04-2

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
What I mean by "as data is processed" is that the column size will grow in Cassandra, but my client is never writing a large column size under any given row... Any idea what's going on here?

On Wed, Apr 21, 2010 at 3:05 PM, Sonny Heer wrote:
> What does OOM stand for?
>
> for a given insert the size

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
On Wed, Apr 21, 2010 at 5:05 PM, Sonny Heer wrote:
> What does OOM stand for?

out of memory

> for a given insert the size is small (meaning the a single insert
> operation only has about a sentence of data)  although as the insert
> process continues, the columns under a given row key could pote

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
What does OOM stand for?

For a given insert the size is small (meaning a single insert operation only has about a sentence of data), although as the insert process continues, the columns under a given row key could potentially grow to be large. Is that what you mean? An operation entails: Re

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
then that's not the problem. are you writing large rows that OOM during compaction?

On Wed, Apr 21, 2010 at 4:34 PM, Sonny Heer wrote:
> They are showing up as completed?  Is this correct:
>
> Pool Name                    Active   Pending      Completed
> STREAM-STAGE                      0
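If large rows are in fact the problem, one common workaround (a hedged sketch, not anything from this thread) is to shard a hot logical row across several physical row keys so no single Cassandra row grows without bound; before 0.7, compaction had to hold an entire row in memory. The function name and key scheme below are hypothetical.

```python
import zlib

def bucketed_key(row_key, column_name, buckets=16):
    """Spread the columns of one logical row across `buckets` physical rows.

    A stable hash of the column name picks the bucket, so reads for a
    given column always go to the same physical row key.
    """
    bucket = zlib.crc32(column_name.encode("utf-8")) % buckets
    return "%s:%d" % (row_key, bucket)
```

A reader would then issue one get per bucket (or a multiget over all `row_key:0` .. `row_key:15`) to reassemble the logical row.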

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
They are showing up as completed? Is this correct:

Pool Name                    Active   Pending      Completed
STREAM-STAGE                      0         0              0
RESPONSE-STAGE                    0         0              0
ROW-READ-STAGE                    0         0         517446
L

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
you need to figure out where the memory is going. check tpstats; if the pending ops are large somewhere, that means you're generating insert ops faster than the node can handle.

On Wed, Apr 21, 2010 at 4:07 PM, Sonny Heer wrote:
> note: I'm using the Thrift API to insert.  The commitLog directory
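The advice above (watch the pending counts and slow the client down) can be sketched as a simple client-side throttle. This is a hypothetical illustration, not code from the thread: `do_insert` and `pending_backlog` are placeholder callables (the latter might scrape `nodetool tpstats` output).

```python
import time

def throttled_inserts(rows, do_insert, pending_backlog,
                      max_pending=1000, pause=0.5):
    """Insert rows, pausing whenever the server-side backlog is large.

    do_insert(row) performs one write; pending_backlog() returns the
    current pending-op count for the busiest stage.
    """
    sent = 0
    for row in rows:
        # back off until the server catches up
        while pending_backlog() > max_pending:
            time.sleep(pause)
        do_insert(row)
        sent += 1
    return sent
```

The effect is that the client's write rate tracks what the node can actually absorb, instead of piling work into pending queues until the heap fills.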

Re: Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
note: I'm using the Thrift API to insert. The commitLog directory continues to grow. The heap size continues to grow as well. I decreased MemtableSizeInMB, but noticed no change. Any idea what is causing this, and/or what property I need to tweak to alleviate it? What is the "insert th
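For reference, in the 0.6 line the memtable flush thresholds live in conf/storage-conf.xml. The element names and values below are from memory and should be checked against the shipped default config; lowering them trades write throughput for a smaller heap footprint per memtable.

```xml
<!-- conf/storage-conf.xml (Cassandra 0.6) -->
<!-- Flush a memtable once it holds roughly this much data... -->
<MemtableThroughputInMB>64</MemtableThroughputInMB>
<!-- ...or this many operations, whichever comes first -->
<MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
<!-- Flush idle memtables after this long regardless of size -->
<MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
```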

Re: Import using cassandra 0.6.1

2010-04-21 Thread Jonathan Ellis
http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts

On Wed, Apr 21, 2010 at 12:02 PM, Sonny Heer wrote:
> Currently running on a single node with intensive write operations.
>
> After running for a while...
>
> Client starts outputting:
>
> TimedOutException()
>        at
> org

Import using cassandra 0.6.1

2010-04-21 Thread Sonny Heer
Currently running on a single node with intensive write operations.

After running for a while...

Client starts outputting:

TimedOutException()
        at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:12232)
        at org.apache.cassandra.thrift.Cassandra$Client.recv
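A TimedOutException like the one above is the server signaling it could not complete the write in time, so a loader usually wraps each insert in a retry with exponential backoff rather than failing outright. A minimal sketch, assuming `do_insert` is a hypothetical zero-arg callable wrapping one Thrift insert that raises on timeout:

```python
import time

def insert_with_backoff(do_insert, retries=5, base_delay=0.5):
    """Retry a single insert with exponential backoff.

    Delays between attempts grow as base_delay, 2*base_delay,
    4*base_delay, ...; the last failure is re-raised to the caller.
    """
    for attempt in range(retries):
        try:
            return do_insert()
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

Backoff alone only masks the symptom, though; per the FAQ link above, the real fix is pacing the client so the node's flush and compaction can keep up.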