On Thu, Apr 19, 2012 at 10:07 PM, John Doe <jd...@yahoo.com> wrote:

> Franc Carter <franc.car...@sirca.org.au>
>
> > One of the projects I am working on is going to need to store about
> > 200TB of data - generally in manageable binary chunks. However, after
> > doing some rough calculations based on rules of thumb I have seen for
> > how much storage should be on each node, I'm worried.
> >  200TB with RF=3 is 600TB = 600,000GB
> >  Which is 1000 nodes at 600GB per node
> > I'm hoping I've missed something, as 1000 nodes is not viable for us.
>
> Why only 600GB per node?
>

I had seen comments that you don't want to put 'too much' data onto a
single node, with 400GB thrown around as an approximate figure - I rounded
up to 600GB to make the maths easy ;-)
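
For what it's worth, here is the back-of-envelope arithmetic I used, as a
quick Python sketch (the 600GB/node figure is just the assumption above,
not a recommendation):

    # Rough node-count estimate from raw data size, replication factor,
    # and assumed usable storage per node
    raw_tb = 200        # raw data to store, in TB
    rf = 3              # replication factor
    per_node_gb = 600   # assumed usable storage per node, in GB

    total_gb = raw_tb * 1000 * rf     # 600,000 GB once replicated
    nodes = total_gb / per_node_gb    # = 1000 nodes at 600 GB/node
    print(nodes)                      # 1000.0

Bumping per_node_gb to, say, 2000 drops that to 300 nodes, which is why
I'm asking whether the 400-600GB rule of thumb really applies here.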

I'm hoping that my understanding is flawed ;-)

cheers


>
> JD
>
>


-- 

*Franc Carter* | Systems architect | Sirca Ltd

franc.car...@sirca.org.au | www.sirca.org.au

Tel: +61 2 9236 9118

Level 9, 80 Clarence St, Sydney NSW 2000

PO Box H58, Australia Square, Sydney NSW 1215
