mike wrote:
> On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
>
>>  One disk can be one vdev.
>>  A 1+1 mirror can be a vdev, too.
>>  An n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev, too.
>>
>> - Then you combine vdevs to create a pool; ZFS stripes data dynamically
>>  across them. Pools can be extended by adding more vdevs.
>>
>> - Then you create ZFS file systems that draw their block usage from the
>>  resources supplied by the pool. Very flexible.
>
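To make that layering concrete, the basic flow looks roughly like this
(pool and device names below are just made-up examples):

  # create a pool from a single mirrored vdev
  zpool create tank mirror c1t0d0 c1t1d0

  # extend the pool later by adding a second mirror vdev
  zpool add tank mirror c1t2d0 c1t3d0

  # file systems then allocate their blocks from the whole pool
  zfs create tank/media
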
> This actually brings up something I was wondering about last night:
>
> If I were to plan a 16-disk ZFS-based system, you would probably
> suggest configuring it as something like 5+1, 4+1, 4+1, all raid-z
> (I don't need the double-parity option)
>
> I would prefer something like 15+1 :) I want ZFS to be able to detect
> and correct errors, but I do not need to squeeze all the performance
> out of it (I'll be using it as a home storage server for my DVDs and
> other audio/video stuff, so there will be a few clients at most
> streaming off of it)
>
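Just to put numbers on that trade-off (device names below are made up):
the suggested split gives you 13 disks of usable space, while the wide
layout gives you 15 but keeps only a single disk of parity for all 16
drives, so a second failure during a rebuild loses everything. Roughly:

  # three raid-z vdevs: 5+1, 4+1, 4+1 (13 disks of usable space)
  zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

  # or one wide 15+1 raid-z vdev (15 disks of usable space,
  # but still only one disk of parity for the whole set)
  zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
            c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
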
> I would be interested in hearing if there are any other configuration
> options to squeeze the most space out of the drives. I have no issue
> with powering down to replace a bad drive, and I expect that I'll only
Just be aware that if your server and disks have been running
continuously, shutting the server down while you wait for a replacement
drive might actually kill your array, especially with consumer IDE/SATA
drives.

Those pesky consumer drives aren't made for 24/7 usage; I think they're
spec'ed at something like 8 hours a day? Either way, that's me getting
sidetracked. The problem is this: you have a disk that has been spinning
normally, with some access and a steady temperature, all the time. Then
all of a sudden you change its environment, let it cool down, and so on.
Hard disks don't like that at all! I've even heard of drive casings
cracking because of temperature differences like that.

My requirements are the same, and I want space too, but the thought of
more disks dying on me while I replace the broken one doesn't really
make me happy either. (I personally use only the WD RAID Edition drives;
whether they're worth it or not, I don't know, but they carry a longer
warranty and are supposedly rated for 24/7 operation.)
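
For what it's worth, if your hardware lets you swap the disk without
powering down, the replacement itself is simple. A rough sketch,
assuming a pool named tank and a failed disk c1t3d0 (both made up):

  # see which vdev/disk has failed
  zpool status tank

  # swap the hardware, then tell ZFS to resilver onto the new disk
  zpool replace tank c1t3d0

  # watch the resilver progress
  zpool status tank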

> have one at most fail at a time. If I really do need room for two
> to fail, then I suppose I can look for a setup with 14 drives of
> usable space and use raidz-2.
>
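For reference, that raidz-2 variant would be a single double-parity
vdev across all 16 disks, something like this (hypothetical device
names again):

  # 16 disks, two of them parity: 14 disks of usable space,
  # and the pool survives any two simultaneous failures
  zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
             c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
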
> Thanks,
> mike

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
