I am using 1.2.3. I started with the default heap (2 GB, without JNA installed),
then changed to a 4 GB heap with a 400 MB young generation, and installed JNA.
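For anyone reading the archive later, the heap change described above would look roughly like this in conf/cassandra-env.sh (a sketch; the surrounding auto-calculation logic in your copy of the file may differ):

```shell
# conf/cassandra-env.sh -- override the auto-calculated JVM sizes.
# MAX_HEAP_SIZE and HEAP_NEWSIZE should be set together.
MAX_HEAP_SIZE="4G"      # total JVM heap
HEAP_NEWSIZE="400M"     # young generation
```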
The bloom filters on the CFs are lowered (more false positives, less disk space).
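Lowering the bloom filters means raising the per-CF false-positive chance. A sketch of how that is done via the schema (the keyspace/CF names are taken from the log below; the 0.1 value is an assumption, not what I actually used):

```cql
-- cqlsh sketch: raise the false-positive chance to shrink the bloom
-- filters for this CF (0.1 is an assumed example value).
ALTER TABLE "CRAWLER".counters WITH bloom_filter_fp_chance = 0.1;
```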

 WARN [ScheduledTasks:1] 2013-04-11 11:09:41,899 GCInspector.java (line
142) Heap is 0.9885574036095974 full.  You may need to reduce memtable
and/or cache sizes.  Cassandra will now flush up to the two largest
memtables to free up memory.  Adjust flush_largest_memtables_at threshold
in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-04-11 11:09:41,906 StorageService.java (line
3541) Flushing CFS(Keyspace='CRAWLER', ColumnFamily='counters') to relieve
memory pressure
 INFO [ScheduledTasks:1] 2013-04-11 11:09:41,949 ColumnFamilyStore.java
(line 637) Enqueuing flush of Memtable-counters@862481781(711504/6211531
serialized/live bytes, 11810 ops)
ERROR [Thrift:641] 2013-04-11 11:25:19,563 CassandraDaemon.java (line 164)
Exception in thread Thread[Thrift:641,5,main]
java.lang.OutOfMemoryError: Java heap space
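For anyone tuning around the warning above: the threshold the GCInspector mentions lives in cassandra.yaml, alongside two related emergency valves. A sketch of the relevant 1.2-era settings (the values shown are the shipped defaults as far as I recall; treat them as an assumption and check your own file):

```yaml
# cassandra.yaml -- emergency pressure valves (fractions of max heap).
# When the heap is this full after a full GC, flush the largest memtables:
flush_largest_memtables_at: 0.75
# When the heap is this full, shrink the key/row caches:
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
```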



On Thu, Apr 11, 2013 at 11:26 PM, aaron morton <aa...@thelastpickle.com> wrote:

> > The data will be huge, I am estimating 4-6 TB per server. I know this is
> not the best, but those are my resources.
> You will have a very unhappy time.
>
> The general rule of thumb / guideline for an HDD-based system with 1G
> networking is 300 GB to 500 GB per node. See previous discussions on this
> topic for reasons.
>
> > ERROR [Thrift:641] 2013-04-11 11:25:19,563 CassandraDaemon.java (line
> 164) Exception in thread Thread[Thrift:641,5,main]
> > ...
> >  INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,915
> ThriftServer.java (line 116) Stop listening to thrift clients
> What was the error?
>
> What version are you using?
> If you have changed any defaults for memory in cassandra-env.sh or
> cassandra.yaml revert them. Generally C* will do the right thing and not
> OOM, unless you are trying to store a lot of data on a node that does not
> have enough memory. See this thread for background
> http://www.mail-archive.com/user@cassandra.apache.org/msg25762.html
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/04/2013, at 7:35 AM, Nikolay Mihaylov <n...@nmmm.nu> wrote:
>
> > For one project I will need to run cassandra on following dedicated
> servers:
> >
> > Single Xeon CPU, 4 cores, no hyper-threading, 8 GB RAM, 12 TB of locally
> attached HDDs in some kind of RAID, visible as a single disk.
> >
> > I can do cluster of 20-30 such servers, may be even more.
> >
> > The data will be huge, I am estimating 4-6 TB per server. I know this is
> not the best, but those are my resources.
> >
> > Currently I am testing with one such server, except the HDD is 300 GB.
> Every 15-20 hours I run out of heap memory, e.g. something like:
> >
> > ERROR [Thrift:641] 2013-04-11 11:25:19,563 CassandraDaemon.java (line
> 164) Exception in thread Thread[Thrift:641,5,main]
> > ...
> >  INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,915
> ThriftServer.java (line 116) Stop listening to thrift clients
> >  INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,943 Gossiper.java
> (line 1077) Announcing shutdown
> >  INFO [StorageServiceShutdownHook] 2013-04-11 11:26:08,613
> MessagingService.java (line 682) Waiting for messaging service to quiesce
> >  INFO [ACCEPT-/208.94.232.37] 2013-04-11 11:26:08,655
> MessagingService.java (line 888) MessagingService shutting down server
> thread.
> > ERROR [Thrift:721] 2013-04-11 11:26:37,709 CustomTThreadPoolServer.java
> (line 217) Error occurred during processing of message.
> > java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has
> shut down
> >
> > Does anyone have advice on better utilization of such servers?
> >
> > Nick.
>
>
