Erik Trimble wrote:
> As always, the devil is in the details. In this case, the primary
> problem I'm having is maintaining two different block mapping schemes
> (one for the old disk layout, and one for the new disk layout) and
> still being able to interrupt the expansion process.  My primary
> problem is that I have to keep both schemes in memory during the
> migration, and if something should happen (i.e. reboot, panic, etc)
> then I lose the current state of the zpool, and everything goes to
> hell in a handbasket.

It might not be that bad, if only zfs would allow mirroring a raidz pool.  Back 
when I did storage admin for a smaller company where availability was 
hyper-critical (but we couldn't afford EMC/Veritas), we had a hardware RAID5 
array.  After a few years of service, we ran into some problems:
* Need to restripe the array?  Screwed.
* Need to replace the array because current one is EOL?  Screwed.
* Array controller barfed for whatever reason?  Screwed.
* Need to flash the controller with latest firmware?  Screwed.
* Need to replace a component on the array, e.g. NIC, controller or power 
supply?  Screwed.
* Need to relocate the array?  Screwed.

If we could stomach downtime or short-lived storage solutions, none of this 
would have mattered.

To get around this, we took two hardware RAID arrays and mirrored them in 
software.  We could offline/restripe/replace/upgrade/relocate/do whatever we 
wanted to an individual array, since it was only one side of a mirror which we 
could offline/online or detach/attach at will.

I suspect this could be simulated today by setting up a mirrored pool on top 
of zvols carved out of raidz pools.  That involves a lot of overhead, since 
parity/checksum calculations are done multiple times for the same data.  On 
the plus side, setting this up might make it possible to defrag a pool.
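A rough sketch of that simulation, using standard zpool/zfs commands. All pool, volume, and device names here are placeholders I made up for illustration, and the sizes are arbitrary:

```shell
# Build two independent raidz pools (stand-ins for the two hardware arrays).
zpool create raidz_a raidz c1t0d0 c1t1d0 c1t2d0
zpool create raidz_b raidz c2t0d0 c2t1d0 c2t2d0

# Carve a zvol out of each raidz pool.
zfs create -V 100G raidz_a/vol0
zfs create -V 100G raidz_b/vol0

# Mirror the two zvols in a new pool.  Every write to 'data' now pays
# its own checksumming plus raidz parity/checksum work in each backing
# pool -- the overhead mentioned above.
zpool create data mirror /dev/zvol/dsk/raidz_a/vol0 \
                         /dev/zvol/dsk/raidz_b/vol0
```

Either backing pool can then be detached, rebuilt with a new geometry, and re-attached, with the mirror resilvering it from the surviving side.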

Should zfs simply allow mirroring one pool with another, then with a few spare 
disks lying around, the geometry of an existing pool could be altered with 
zero downtime using steps similar to the following.
1. Create spare_pool as large as current_pool using spare disks
2. Attach spare_pool to current_pool
3. Wait for resilver to complete
4. Detach and destroy current_pool
5. Create new_pool the way you want it now
6. Attach new_pool to spare_pool
7. Wait for resilver to complete
8. Detach/destroy spare_pool
9. Chuckle at the fact that you completely remade your production pool while 
fully available
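If zfs grew that capability, the steps above might look something like this. To be clear, the ability to attach one pool as a mirror of another is exactly the hypothetical feature being proposed, so the `zpool attach`/`zpool detach` lines below are invented syntax, not commands that work today, and all pool/device names are placeholders:

```shell
# 1. Create spare_pool at least as large as current_pool
zpool create spare_pool c3t0d0 c3t1d0

# 2-3. HYPOTHETICAL: attach spare_pool as a mirror of current_pool,
#      then watch the resilver finish
zpool attach current_pool spare_pool   # not real syntax today
zpool status current_pool

# 4. Detach the old layout and destroy it, freeing its disks
zpool detach current_pool              # hypothetical
zpool destroy current_pool

# 5. Rebuild new_pool with the geometry you actually want
zpool create new_pool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# 6-7. HYPOTHETICAL: mirror it against spare_pool and resilver
zpool attach spare_pool new_pool       # not real syntax today
zpool status spare_pool

# 8. Detach and destroy spare_pool, reclaiming the spares
zpool detach new_pool                  # hypothetical
zpool destroy spare_pool
```

Data stays online through every step; the only cost is two resilver passes and a handful of spare disks.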

I did this dance several times over the course of many years, back in the 
DiskSuite days.

Thoughts?

Marty
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss