I think it will be in the next-next OS X (10.6); we just need to get Apple to
stop playing with their silly cell phone (which I can't help but want, damn
them!).

I have a similar situation at home, but what I do is run Solaris 10 on a
cheapish x86 box with six 400 GB IDE/SATA disks. I make them into iSCSI
targets and use the free GlobalSAN initiator ([EMAIL PROTECTED]). I was once
like you, with five USB/FireWire drives hanging off everything, and eventually
I just got fed up with the mess of cables and wall warts.
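
In case it helps anyone, a rough sketch of that kind of setup (the pool and
volume names are mine, and the shareiscsi property assumes a Solaris
Express/Nevada build; the exact target-side commands vary by release):

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zfs create -V 500g tank/maciscsi
# zfs set shareiscsi=on tank/maciscsi

The Mac then logs in to that target with the GlobalSAN initiator and sees it
as a local disk.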

Perhaps my method of getting redundant, fast storage isn't as easy for
everyone else to achieve. If you want more details about my setup, just
email me directly; I don't mind :)

-Andy



On 5/7/07 4:48 PM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:

> Lee,
> 
> Yes, the hot spare (disk4) should kick in if another disk in the pool fails,
> and yes, the data is rebuilt onto disk4.
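>
> (A sketch of the mechanics, using the disk names from my earlier mail and
> assuming disk1 is the one that fails: "zpool status pool" will show disk4
> as INUSE while the resilver runs. You can then either make disk4 a
> permanent member with
>
> # zpool detach pool disk1
>
> or put in a new disk and run "zpool replace pool disk1 new-disk", after
> which disk4 goes back on the spare list automatically.)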
> 
> You are correct:
> 
> 160 GB (the smallest disk) * 3, less the raidz parity overhead
> 
> Here's the size of a raidz pool made up of three 136 GB disks:
> 
> # zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> pool                    408G     98K    408G     0%  ONLINE     -
> # zfs list
> NAME                   USED  AVAIL  REFER  MOUNTPOINT
> pool                  89.9K   267G  32.6K  /pool
> 
> The pool is 408GB in size but usable space in the pool is 267GB.
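>
> (Spelling out the arithmetic: 3 x 136 GB = 408 GB of raw space; raidz
> stores one disk's worth of parity, so usable space is about
> 2 x 136 GB = 272 GB, which metadata overhead reduces to the 267 GB
> reported.)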
> 
> If you added the 600GB disk to the pool, you'd still lose out
> on the extra capacity because of the smaller disks, which is why
> I suggested using it as a spare.
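>
> (To see why: each member of a raidz vdev contributes only as much space
> as the smallest disk, so a 4-disk raidz of 250/200/160/600 GB drives
> would give you roughly 3 x 160 GB = 480 GB usable and leave ~440 GB of
> the 600GB drive unused.)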
> 
> Regarding this:
> 
> If I didn't need a hot spare, but instead could live with running out
> and buying a new drive to add on as soon as one fails, what
> configuration would I use then?
> 
> I don't have any additional ideas, but I still recommend going with a spare.
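>
> (For completeness, a minimal sketch of that alternative: create the pool
> without the spare,
>
> # zpool create pool raidz disk1 disk2 disk3
>
> and when a disk fails, buy a replacement of equal or larger size and run
>
> # zpool replace pool disk2 new-disk
>
> using disk2 here as the made-up casualty. The pool runs DEGRADED but
> usable until the resilver finishes.)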
> 
> Cindy
> 
> 
> 
> 
> 
> Lee Fyock wrote:
>> Cindy,
>> 
>> Thanks so much for the response -- this is the first one that I consider
>> an actual answer. :-)
>> 
>> I'm still unclear on exactly what I end up with. I apologize in advance
>> for my ignorance -- the ZFS admin guide assumes knowledge that I don't
>> yet have.
>> 
>> I assume that disk4 is a hot spare, so if one of the other disks dies,
>> it'll kick into active use. Is data immediately replicated from the
>> other surviving disks to disk4?
>> 
>> What usable capacity do I end up with? 160 GB (the smallest disk) * 3?
>> Or less, because raidz has parity overhead? Or more, because that
>> overhead can be stored on the larger disks?
>> 
>> If I didn't need a hot spare, but instead could live with running out
>> and buying a new drive to add on as soon as one fails, what
>> configuration would I use then?
>> 
>> Thanks!
>> Lee
>> 
>> On May 7, 2007, at 2:44 PM, [EMAIL PROTECTED] wrote:
>> 
>>> Hi Lee,
>>>
>>> You can decide whether you want to use ZFS for a root file system now.
>>> You can find that info here:
>>>
>>> http://opensolaris.org/os/community/zfs/boot/
>>>
>>> Consider this setup for your other disks, which are 250, 200, and 160 GB
>>> drives, plus an external USB 2.0 600 GB drive:
>>>
>>> 250GB = disk1
>>> 200GB = disk2
>>> 160GB = disk3
>>> 600GB = disk4 (spare)
>>>
>>> I include a spare in this setup because you want to be protected from a
>>> disk failure. Since the replacement disk must be equal to or larger than
>>> the disk it replaces, I think this is the best (safest) solution:
>>>
>>> zpool create pool raidz disk1 disk2 disk3 spare disk4
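>>>
>>> For example, with made-up device names, that might look like:
>>>
>>> # zpool create pool raidz c1t1d0 c1t2d0 c1t3d0 spare c1t4d0
>>> # zpool status pool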
>>>
>>> This setup provides less capacity but better safety, which is probably
>>> important for older disks. Because of the spare size requirement (equal
>>> to or larger), I don't see a better arrangement, but I hope someone else
>>> can provide one.
>>>
>>> Your questions remind me that I need to provide additional information
>>> about the current ZFS spare feature...
>>>
>>> Thanks,
>>> Cindy
>> 
>> 



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
