Today I retried with the 2GB heap and it is working now. No more
out-of-memory error. It looks like I had to restart Cassandra several
times before the new setting took effect.
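
For reference, the heap size in a 0.6-era install is set via JVM_OPTS
in bin/cassandra.in.sh; that location is an assumption based on the
stock tarball layout, so verify it in your packaging:

    # bin/cassandra.in.sh (assumed location; verify in your install)
    # Pin both the initial and max heap at 2GB so the JVM does not
    # resize the heap under load:
    JVM_OPTS=" \
            -Xms2G \
            -Xmx2G"

JVM flags are read only at startup, so a new heap size cannot take
effect until the process is restarted.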

-----Original Message-----
From: Benjamin Black [mailto:b...@b3k.us] 
Sent: Monday, June 14, 2010 7:46 PM
To: user@cassandra.apache.org
Subject: Re: java.lang.OutOfMemoryError: Java heap space

My guess: you are outrunning your disk I/O.  Each of those 5MB rows
gets written to the commitlog, and the memtable is flushed when it
hits the configured limit, which you've probably left at 128MB.  Every
25 rows or so you trigger a memtable flush to disk.  Until those
flushes complete, the data stays in RAM.

If this is actually representative of your production use, you need a
dedicated commitlog disk, several drives in RAID0 or RAID10 for data,
a lot more RAM, and a much larger memtable flush size.
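
For illustration, the flush threshold in a 0.6-era storage-conf.xml
looks roughly like the following; the element name and placement are
from memory, so check them against the sample config your version
ships with:

    <!-- conf/storage-conf.xml (element name assumed; verify) -->
    <!-- Flush a memtable to disk once it holds this much data.  -->
    <!-- With 5MB rows, a 128MB threshold fills after ~25 rows;  -->
    <!-- a larger value batches more rows per flush at the cost  -->
    <!-- of holding more data in the heap between flushes.       -->
    <MemtableThroughputInMB>512</MemtableThroughputInMB>

Whatever value you pick must still fit in the heap alongside the
in-flight reads, which is why the larger flush size goes hand in hand
with more RAM.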


b

On Mon, Jun 14, 2010 at 6:13 PM, Caribbean410 <caribbean...@gmail.com>
wrote:
> Hi,
>
> I wrote 200k records to the db, each record 5MB. I get this error when I
> use 3 threads (each trying to read all 200k records, 100 records at a
> time) to read data from the db. The writes are OK; the error comes from
> the reads. Right now the Xmx of the JVM is 1GB. I changed it to 2GB and
> it is still not working. If the record size is under 4KB, I do not get
> this error. Any clues to avoid this error?
>
> Thx
>
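
A quick back-of-the-envelope check on the read path, using only the
numbers quoted above:

    3 threads x 100 records/batch x 5MB/record = 1500MB

so one round of reads can hold roughly 1.5GB of row data in flight,
which by itself nearly exhausts a 2GB heap before any of Cassandra's
own overhead is counted. That is consistent with 4KB records not
triggering the error.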
