Hi,

Thanks for the guidance. I have not turned up any memory settings; the
nodes are configured with all defaults (except that disk access uses
mmap). I have 3 nodes and 1 client using Hector with 8 writer threads.
There are 3 column families: 1 standard and 2 super.
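
For reference, that one override in cassandra.yaml looks like this
(0.7-style config; everything else is stock):

  disk_access_mode: mmap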

Thanks,
Trung.

On Mon, Nov 22, 2010 at 11:00 AM, Aaron Morton <aa...@thelastpickle.com> wrote:
> The higher memory usage for the java process may be because of memory-mapped
> file access; take a look at disk_access_mode in cassandra.yaml.
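> One quick way to see how much of that is mapped data is to compare the
> JVM heap against the total resident size, e.g. (a sketch; pmap's output
> varies by platform, and the pgrep pattern assumes the default daemon
> class name):
>
>   # total resident size, including mmap'd SSTables, vs the heap
>   pmap -x $(pgrep -f CassandraDaemon) | tail -n 1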
> WRT going OutOfMemory:
> - what are your Memtable thresholds in cassandra.yaml ?
> - how many Column Families do you have?
> - What are your row and key cache settings?
> - Have a read of the JVM Heap Size section
> here: http://wiki.apache.org/cassandra/MemtableThresholds
> - Have a read
> of http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
> In short, if you've turned up any memory settings, turn them down. Run your
> test again and see if it completes. Then turn them up a little at a time.
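> For example, a sketch of knobs to turn down (names taken from a 0.7-era
> cassandra.yaml and CLI, values illustrative, so verify them against your
> version; Super1 stands in for one of your super CFs):
>
>   # cassandra.yaml: emergency pressure valves -- flush and shrink earlier
>   flush_largest_memtables_at: 0.70
>   reduce_cache_sizes_at: 0.80
>   reduce_cache_capacity_to: 0.5
>
>   # cassandra-cli: lower per-CF memtable and cache thresholds
>   update column family Super1 with memtable_throughput=64 and rows_cached=0;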
> If you're still having trouble, include some details of your cassandra.yaml
> file and the schema definition next time, as well as how many Cassandra
> nodes you have, how many clients you are running against them, and how fast
> they are writing.
> Aaron
>
> On 23 Nov 2010, at 07:45 AM, Trung Tran <tran.hieutr...@gmail.com> wrote:
>
> Hi,
>
> I have a test cluster of 3 nodes, 14 GB of memory in each node, and
> replication factor = 3. With the default -Xms and -Xmx, my nodes are set
> to a max heap size of 7 GB. After an initial load of about 200M rows
> (written with Hector's default ConsistencyLevel = QUORUM), memory usage
> on my nodes climbs to 13.5 GB; they log a bunch of GC notifications and
> eventually crash with java.lang.OutOfMemoryError: Java heap space.
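>
> The 7 GB comes from the default conf/cassandra-env.sh calculation (half
> of system memory). For reference, pinning the heap lower would look
> something like this (a sketch; variable names from 0.7's cassandra-env.sh,
> values illustrative):
>
>   # conf/cassandra-env.sh
>   MAX_HEAP_SIZE="6G"
>   HEAP_NEWSIZE="400M"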
>
> Is there any setting that can help with this scenario?
>
> Thanks,
> Trung.
>
