Hi Robbit,

I think it's running out of disk space; please verify that (on Linux: df -h).
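If you'd rather check programmatically (e.g. from a monitoring script), a minimal sketch using Python's standard library gives the same information as df -h; the data directory path below is an assumption, adjust it to your data_file_directories setting:

```python
import shutil

# Path is an assumption; on a default install the Cassandra data
# directory is typically /var/lib/cassandra/data. Using "/" here so
# the sketch runs anywhere.
data_dir = "/"

# shutil.disk_usage reports total/used/free bytes for the filesystem
# containing the given path.
usage = shutil.disk_usage(data_dir)
pct_free = usage.free / usage.total * 100

print(f"{pct_free:.1f}% free "
      f"({usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB)")
```

Compactions can temporarily need as much free space as the SSTables being compacted, so low free space shows up exactly as the "insufficient space to compact" warning below.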

Best regards,

Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl

Disclaimer: The information contained in this message and attachments is
intended solely for the attention and use of the named addressee and may be
confidential. If you are not the intended recipient, you are reminded that
the information remains the property of the sender. You must not use,
disclose, distribute, copy, print or rely on this e-mail. If you have
received this message in error, please contact the sender immediately and
irrevocably delete this message and any copies.



2012/9/14 rohit reddy <rohit.kommare...@gmail.com>

> Hi,
>
> I'm facing a problem in Cassandra cluster deployed on EC2 where the node
> is going down under write load.
>
> I have configured a cluster of 4 Large EC2 nodes with an RF of 2.
> All nodes are instance-storage backed; the disk is RAID0 with 800GB.
>
> I'm pumping in write requests at about 4,000 writes/sec. One of the nodes
> went down under this load. The total data size on each node was not more
> than 7GB.
> I got the following WARN messages in the log file:
>
> 1. setting live ratio to minimum of 1.0 instead of 0.9003153296009601
> 2. Heap is 0.7515559786053904 full.  You may need to reduce memtable
> and/or cache sizes.  Cassandra will now flush up to the two largest
> memtables to free up memory.  Adjust flush_largest_memtables_at threshold
> in cassandra.yaml if you don't want Cassandra to do
> this automatically
> 3. WARN [CompactionExecutor:570] 2012-09-14 11:45:12,024
> CompactionTask.java (line 84) insufficient space to compact all requested
> files
>
> All cassandra settings are default settings.
> Do I need to tune anything to support this write rate?
>
> Thanks
> Rohit
>
>
