On Tue, Sep 27, 2011 at 1:21 PM, Matt Banks <mattba...@gmail.com> wrote:

> Also, maybe I read it wrong, but why is it that (in the previous thread about
> hw RAID and zpools) zpools with large numbers of physical drives (e.g., 20+)
> were frowned upon? I know that ZFS != WAFL, but it's so common in the
> NetApp world that I was surprised to read that. A 20-drive RAID-Z2 pool
> really wouldn't/couldn't recover (resilver) from a drive failure? That seems
> to fly in the face of the x4500 boxes from a few years ago.

    There is a world of difference between a zpool with 20+ drives
and a single vdev with 20+ drives. What has been frowned upon is a
single vdev with more than about 8 drives. I have a zpool with 120
drives: 22 vdevs, each a 5-drive raidz2, plus 10 hot spares. The only
failures I have had to resilver happened before the pool went into
production (and I had little data in it at the time), but I expect
resilver times to be reasonable based on my experience with other
configurations.
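
    To make the "many narrow vdevs vs. one wide vdev" distinction
concrete, here is a rough sketch of the two layouts. The pool and
device names are made up, and only the first two of the 22 vdevs are
shown:

  # one pool built from narrow raidz2 vdevs plus hot spares
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz2 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
      spare  c9t0d0 c9t1d0

  # one pool that is a single 20-drive raidz2 vdev -- the layout
  # that gets frowned upon
  zpool create widepool raidz2 \
      c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
      c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c2t16d0 c2t17d0 c2t18d0 c2t19d0

Both commands produce a single pool; the difference is in how reads,
writes, and resilver work get spread across the vdevs.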

    Keep in mind that random read I/O is proportional to the number of
vdevs, NOT the number of drives. See
https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AtReWsGW-SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXc&output=html
for the results of some of my testing.
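
    As a rough back-of-the-envelope illustration (the per-disk figure
here is a made-up round number, not a measurement): if you assume a
single disk can do on the order of 100 random read IOPS, then 20
disks configured as one 20-drive raidz2 vdev gets you roughly 1 x 100
= ~100 random read IOPS, while the same 20 disks configured as four
5-drive raidz2 vdevs gets you roughly 4 x 100 = ~400.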

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
