Replies inline (I really would recommend reading the whole ZFS Best
Practices guide a few times - many of your questions are answered in
that document):

On Fri, Mar 20, 2009 at 3:15 PM, Harry Putnam <rea...@newsguy.com> wrote:

>
> I didn't make it clear.  1 disk, the one with rpool on it is 60gb.
> The other 3 are 500GB.  Using a 500gb to mirror a 60gb would be
> something of a waste .. eh?
In the near-term, yes, but it would work.
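For what it's worth, the attach itself is a one-liner -- roughly like
this, with made-up device names (a root pool wants SMI-labeled disks,
so you attach the s0 slice):

    # mirror the existing 60gb rpool disk onto the 500gb disk;
    # usable size stays at the smaller disk's ~60gb
    zpool attach rpool c1t0d0s0 c2t0d0s0

    # watch the resilver
    zpool status rpool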

>
> And is a mirror of rpool really important?  I assumed there would be
> some way to backup rpool in a format that could be written onto a new
> disk and booted in the event of rpool disk failure.  With the backup
> not kept on rpool disk.
You say you want a storage server you can forget about - that sounds
like zfs self-healing, which requires a mirror at least.
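To make that concrete: with any redundant vdev, zfs can rewrite a bad
copy of a block from the good one, and a periodic scrub makes it go
looking for problems before you notice them.  Something like this,
with made-up pool/device names:

    # two-way mirror: corrupt blocks get repaired from the other side
    zpool create tank mirror c2t0d0 c2t1d0

    # verify every block now and then; zfs fixes what it can as it goes
    zpool scrub tank
    zpool status -v tank    # shows any checksum errors it repaired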

>
> Someone had suggested even creating an rpool mirror and putting the
> bootmanager bits on its mbr but then keeping it in storage instead of
> on the machine (freeing a controller port).
This is not a replacement for live redundancy.
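If you do attach a second rpool disk, don't forget the boot blocks --
on x86 that's roughly (made-up device name again):

    # put grub on the new half of the rpool mirror so it can boot on its own
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0

Then leave the disk in the machine; zfs keeps it current for you, which
is the whole point.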

>
> It has the downside of having to be re-mirrored every now and then.
Actually, the point of a backup is to have a known-good copy of data
somewhere.  Re-mirroring would be a mistake, as it destroys your old
data state.
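If what you're after is a backup of rpool rather than a live mirror,
snapshots plus zfs send are one way to do it -- roughly like this on a
recent build (snapshot and file names are made up):

    # recursive snapshot of the whole root pool
    zfs snapshot -r rpool@backup-today

    # stream it into a file on the data pool (or over ssh to another box)
    zfs send -R rpool@backup-today > /tank/backups/rpool-backup.zfs

    # restore later with:  zfs receive -Fd rpool < /tank/backups/rpool-backup.zfs

Each send is a fresh known-good copy; you aren't overwriting the state
you already saved.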

>
> But could be safe enough if nothing of great importance was kept on
> the rpool... Just OS and config changes, some BE's.  But nothing that
> couldn't be lost.
>
> Then in the event of disk failure... You'd have to just install the
> spare, boot it, and bring it up to date.
>
> Something kind of like what people do with ghost images on windows
> OS.
>
>> Start with two mirrored pools of two disks each. In the future,
>> you will be able to add two or more disks to your non-root pool.
>> You can't do that with a RAIDZ pool.
>
> Well one thing there... if I use 5 500gb disks (not counting the rpool
> disk - 6 total), by the time my raidz fills up, I'll need a whole new
> machine really since I'll be out of controller ports and it's getting
> hard to find controllers that are not PCI express already. (My
> hardware is plain PCI only and even then the onboard sata is not
> recognized and I'm adding a PCI sata controller already)
If the hardware is old/partially supported/flaky, all the more reason
to use mirrors.  Any single disk from a mirror can be used standalone.
 Big disks are cheap: http://tinyurl.com/5tzguf
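That flexibility is most of what sells mirrors -- all of these are
routine operations on a mirrored pool (made-up device names):

    # grow the pool later by adding another mirrored pair
    zpool add tank mirror c3t0d0 c3t1d0

    # pull one side of a mirror out entirely; the pool keeps running
    zpool detach tank c2t1d0

    # swap in bigger disks one at a time and let each one resilver
    zpool replace tank c2t0d0 c4t0d0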

>
> Also some of the older data will have outlived its usefulness so what
> needs transferring to a new setup may not be really hard to
> accommodate or insurmountable.
>
> And finally, I'm 65 yrs old... It's kind of hard to imagine my wife and
> I filling up the nearly 2tb of space the above mentioned raidz1 would
> afford before we go before the old grim reaper.
>
> Even with lots of pictures and video projects thrown in.
>
> I'm really thinking now to go to 5 500gb disks in raidz1, and one
> hotswap (Plus the rpool on 1 60gb disk). I would be clear out of both
> sata and IDE controller ports then, so I'm hoping I can add a hot swap
> by pulling one of the raidz disks long enough to add the
> hotswap... then take it back out and replace the missing raidz disk.
See the zfs docs for more about hot spares.  The 'hot' part means the
disk is in the chassis and spinning all the time, ready to replace a
failed drive automatically.  That's not something that's easy to work
out the hardware for in a situation like yours.  If you don't have
room for yet another
disk in the chassis, you won't be able to use a hot spare.
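For reference, a spare is just one more device you hand to the pool --
which is exactly why it needs its own port and slot (made-up device
name):

    # sits idle until a member disk fails, then is pulled in automatically
    zpool add tank spare c3t0d0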

>
> I could do this by getting 3 more 500gb disks: 2 more for the raidz and
> 1 for hotswap.  No other hardware would be needed. All the while
> assuming I can mix 3 500GB IDE and 2 500GB SATA with no problems.
>
>> If you need to, you can even detach one side of the mirror
>> of each pool. You can't do that with a RAIDZ pool. If you need
>> larger pools you can replace all the disks in both pools with
>> larger disks. You can do that with a RAIDZ pool, but more
>> flexibility exists with mirrored pools.
>>
>> 1. Yes, sensible.
>> 2. Saving space isn't always the best configuration.
>> 3. I don't know.
>> 4. Yes, with more disks, you can identify hot spares to
>> be used in the case of a disk failure.
>
> Nice thanks (To Bob F as well).  And I'm not being hard headed about
> using a mirror config.  It's just that I have limited controller ports
> (4 ide 2 sata), limited budget, and kind of wanted to get this backup
> machine setup to where I could basically just leave it alone and let
> the backups run.
>
> On 3) Mixing IDE and SATA on same zpool
> I'd really like to hear from someone who has done that.
In my experience, zfs doesn't care what kind of block device you give it.
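IDE disks just show up as cXdY and SATA disks as cXtYdZ, and you can
hand both to the same vdev -- e.g. something like this (made-up names):

    # three IDE disks and two SATA disks in one raidz1 vdev
    zpool create tank raidz c0d0 c0d1 c1d0 c2t0d0 c2t1d0

The mix works; just keep in mind the vdev runs at roughly the speed of
its slowest member.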

>
> About 4).. so if all controllers are already full with either a zpool
> or rpool.  Do you pull out one of the raidz1 disks to add a hotswap
> then remove the hotswap and put the pulled disk from the raidz back?
>
> If so, does that cause some kind of resilvering or does some other
> thing happen when a machine is booted with a raidz1 disk missing, and
> then rebooted with it back in place?
If you pull a disk from a raidz1 array, you get DEGRADED status in
your zpool status output.  This means that you take a performance hit,
and that if you lose another drive, you lose the whole pool.
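Roughly what the sequence looks like, with made-up names:

    # after pulling a disk
    zpool status tank           # state: DEGRADED, the pulled disk shows UNAVAIL

    # after putting the same disk back
    zpool online tank c2t1d0    # zfs resilvers only what changed while it was out
    zpool status tank           # ONLINE again once the resilver finishes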

This is why most storage admins who use raidz at all choose raidz2,
with a few hot spares ready to go.  That means a chassis that supports
7+ drives, which means a big power supply, etc.
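For example, a 7-slot box might be laid out roughly like this (made-up
device names):

    # double parity plus a spare: survives two failures, and the spare
    # is pulled in automatically on the first one
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        spare c2t6d0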

