On Fri, Mar 15, 2013 at 06:09:34PM -0700, Marion Hakanson wrote:
> Greetings,
> 
> Has anyone out there built a 1-petabyte pool?  I've been asked to look
> into this, and was told "low performance" is fine, workload is likely
> to be write-once, read-occasionally, archive storage of gene sequencing
> data.  Probably a single 10Gbit NIC for connectivity is sufficient.
> 
> We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
> using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
> Back-of-the-envelope might suggest stacking up eight to ten of those,
> depending if you want a "raw marketing petabyte", or a proper "power-of-two
> usable petabyte".
> 
> I get a little nervous at the thought of hooking all that up to a single
> server, and am a little vague on how much RAM would be advisable, other
> than "as much as will fit" (:-).  Then again, I've been waiting for
> something like pNFS/NFSv4.1 to be usable for gluing together multiple
> NFS servers into a single global namespace, without any sign of that
> happening anytime soon.
> 
> So, has anyone done this?  Or come close to it?  Thoughts, even if you
> haven't done it yourself?
> 
> Thanks and regards,
> 
> Marion

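Quick sanity check on the back-of-the-envelope math above, assuming 5x 9-wide
raidz3 vdevs per 45-slot chassis with 4TB drives (that layout is my assumption,
not anything stated above) and ignoring ZFS metadata overhead:

  usable per chassis:  5 vdevs x (9 - 3 parity) x 4TB  ~= 120TB
  "marketing" PB:      1000TB / 120TB                  ~=  9 chassis
  power-of-two PB:     2^50 B ~= 1126TB, / 120TB       ~= 10 chassis

So eight to ten chassis is about right, depending on which petabyte you mean.
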
We've come close:

admin@mes-str-imgnx-p1:~$ zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
datapool   978T   298T   680T    30%  1.00x  ONLINE  -
syspool    278G   104G   174G    37%  1.00x  ONLINE  -

Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual-pathed
to a couple of LSI SAS switches.

Using Nexenta, but there's no reason you couldn't do this w/ $whatever.

We did triple parity (raidz3), and our vdev membership is laid out so that we
can lose up to three whole JBODs and still be functional (each vdev has at
most one member disk in any given JBOD).
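
A sketch of that layout (vdev width and device names are illustrative, not our
exact configuration): each raidz3 vdev takes one disk from each of nine JBODs,
so something like

zpool create datapool \
    raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 c8t0d0 c9t0d0 \
    raidz3 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 c8t1d0 c9t1d0

...and so on for the remaining vdevs, with cN standing in for "a disk in JBOD N"
(real device names on Solaris/Nexenta would typically be multipathed WWNs).
Losing any three JBODs then costs each vdev at most three disks, which raidz3
tolerates.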

This is with 3TB NL-SAS drives.
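
Back-of-the-envelope on the scale only (rough arithmetic, not an exact
inventory; zpool's SIZE includes parity space and is reported in binary units):

  978T raw ~= 1075TB decimal;  1075 / 3TB per drive ~= 360 drives
                                                    ~= 30 MD1200s (12 bays each)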

Ray
