> Hmmm, what is the recommendation for a 10G network if 1G was 300G to
> 500G… I am guessing I can't do 10 times that, correct? But maybe I could
> squeak out 600G to 1T?
Best thing to do would be to run a test of how long it takes to repair or 
bootstrap a node. The 300GB to 500GB was just a guideline.
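For a rough sense of how node size and link speed interact, here is a back-of-the-envelope sketch (my own, not from this thread) of the time to stream a node's data during bootstrap or replacement. The `efficiency` factor is an assumed fudge for CPU, compaction, and validation overhead, which is exactly why measuring on your own hardware beats any formula:

```python
def stream_hours(data_gb, link_gbps, efficiency=0.5):
    """Rough hours to stream `data_gb` gigabytes over a `link_gbps`
    gigabit/s link. `efficiency` is an assumed fraction of line rate
    actually achieved while streaming (CPU, compaction, validation)."""
    gigabits = data_gb * 8
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 3600

# ~2.2 hours to stream 500 GB at half of a 1 Gb/s link; ten times the
# bandwidth cuts pure streaming time tenfold, but repair and bootstrap
# are rarely network-bound alone, so measure before sizing nodes.
print(stream_hours(500, 1))
print(stream_hours(500, 10))
```

Treat the numbers as an upper-bound sanity check, not a capacity plan.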

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 13/04/2013, at 12:02 AM, "Hiller, Dean" <dean.hil...@nrel.gov> wrote:

> Hmmm, what is the recommendation for a 10G network if 1G was 300G to
> 500G… I am guessing I can't do 10 times that, correct? But maybe I could
> squeak out 600G to 1T?
> 
> Thanks,
> Dean
> 
> On 4/11/13 2:26 PM, "aaron morton" <aa...@thelastpickle.com> wrote:
> 
>>> The data will be huge, I am estimating 4-6 TB per server. I know this
>>> is not ideal, but those are my resources.
>> You will have a very unhappy time.
>> 
>> The general rule of thumb / guideline for an HDD-based system with 1G
>> networking is 300GB to 500GB per node. See previous discussions on this
>> topic for reasons.
>> 
>>> ERROR [Thrift:641] 2013-04-11 11:25:19,563 CassandraDaemon.java (line
>>> 164) Exception in thread Thread[Thrift:641,5,main]
>>> ...
>>> INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,915
>>> ThriftServer.java (line 116) Stop listening to thrift clients
>> What was the error?
>> 
>> What version are you using?
>> If you have changed any defaults for memory in cassandra-env.sh or
>> cassandra.yaml, revert them. Generally C* will do the right thing and not
>> OOM, unless you are trying to store a lot of data on a node that does not
>> have enough memory. See this thread for background
>> http://www.mail-archive.com/user@cassandra.apache.org/msg25762.html
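To illustrate why reverting heap overrides usually helps: in the C* 1.x era, cassandra-env.sh derives the default heap from total RAM along roughly these lines (a paraphrase of the script's arithmetic, not the script itself; check your own copy of cassandra-env.sh for the exact logic):

```python
def default_heap_mb(ram_mb):
    # Approximation of the automatic heap sizing in cassandra-env.sh:
    # max(min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB))
    return max(min(ram_mb // 2, 1024), min(ram_mb // 4, 8192))

# An 8 GB box like the one described in this thread gets roughly a
# 2 GB heap, which is why 4-6 TB of data per node leaves little headroom.
print(default_heap_mb(8 * 1024))
```

The cap at 8 GB exists because larger Java heaps of that era suffered long GC pauses, so adding RAM past a point does not raise the default.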
>> 
>> Cheers
>> 
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>> 
>> @aaronmorton
>> http://www.thelastpickle.com
>> 
>> On 12/04/2013, at 7:35 AM, Nikolay Mihaylov <n...@nmmm.nu> wrote:
>> 
>>> For one project I will need to run Cassandra on the following dedicated
>>> servers:
>>> 
>>> A single 4-core Xeon CPU (no hyper-threading), 8 GB RAM, and 12 TB of
>>> locally attached HDDs in some kind of RAID, visible as a single HDD.
>>> 
>>> I can do a cluster of 20-30 such servers, maybe even more.
>>> 
>>> The data will be huge, I am estimating 4-6 TB per server. I know this
>>> is not ideal, but those are my resources.
>>> 
>>> Currently I am testing with one such server, except the HDD is 300 GB.
>>> Every 15-20 hours, I run out of heap memory, e.g. something like:
>>> 
>>> ERROR [Thrift:641] 2013-04-11 11:25:19,563 CassandraDaemon.java (line
>>> 164) Exception in thread Thread[Thrift:641,5,main]
>>> ...
>>> INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,915
>>> ThriftServer.java (line 116) Stop listening to thrift clients
>>> INFO [StorageServiceShutdownHook] 2013-04-11 11:25:39,943
>>> Gossiper.java (line 1077) Announcing shutdown
>>> INFO [StorageServiceShutdownHook] 2013-04-11 11:26:08,613
>>> MessagingService.java (line 682) Waiting for messaging service to quiesce
>>> INFO [ACCEPT-/208.94.232.37] 2013-04-11 11:26:08,655
>>> MessagingService.java (line 888) MessagingService shutting down server
>>> thread.
>>> ERROR [Thrift:721] 2013-04-11 11:26:37,709 CustomTThreadPoolServer.java
>>> (line 217) Error occurred during processing of message.
>>> java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has
>>> shut down
>>> 
>>> Does anyone have any advice on better utilization of such servers?
>>> 
>>> Nick.
>> 
> 
