> serving a load of approximately 600GB
Is that 600GB in the cluster or 600GB per node?
In the pre-1.2 days we recommended around 300GB to 500GB per node with spinning disks 
and 1GbE networking. It's a soft rule of thumb, not a hard rule. Above that size, 
repair and replacing a failed node can take a long time.
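To get a feel for why node size matters, here is a rough back-of-envelope sketch (the throughput figure is an assumption, not something from this thread): streaming a node's data over 1GbE is bounded by roughly 100 MB/s of sustained transfer, and repair adds validation compaction on top of that, so this is only a lower bound.

```python
# Lower-bound estimate for streaming a node's data over 1GbE.
# Assumption: ~100 MB/s sustained (1GbE tops out near 125 MB/s in
# theory; protocol overhead and disk contention eat into that).
# Real repair/replace times will be longer than this.

def streaming_hours(data_gb: float, rate_mb_s: float = 100.0) -> float:
    """Minimum hours to stream `data_gb` of data at `rate_mb_s` MB/s."""
    return (data_gb * 1000) / rate_mb_s / 3600

for size_gb in (300, 500, 600):
    print(f"{size_gb} GB -> at least {streaming_hours(size_gb):.1f} h")
```

So even under ideal conditions a 600GB node needs well over an hour and a half just to move the bytes, before any validation work; in practice it is usually much longer.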
 
> Does anyone have CPU/memory/network graphs (e.g. Cacti) over the last 1-2 
> months they are willing to share of their Cassandra database nodes?
If you can share yours and any specific concerns you may have we may be able to 
help. 

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 24/06/2013, at 1:14 PM, G man <gmanli...@gmail.com> wrote:

> Hi All,
> 
> We are running a 1.0.9 cluster with 3 nodes (RF=3) serving a load of 
> approximately 600GB, and since I am fairly new to Cassandra, I'd like to 
> compare notes with other people running a cluster of similar size (perhaps 
> not in the amount of data, but the number of nodes).
> 
> Does anyone have CPU/memory/network graphs (e.g. Cacti) over the last 1-2 
> months they are willing to share of their Cassandra database nodes?
> 
> Just trying to compare our patterns with others to see if they are "normal".
> 
> Thanks in advance.
> G
