This doesn't really answer your question directly but could probably
help anyone planning a UFS->ZFS migration...

I conducted a UFS/SVM -> ZFS migration over the weekend for a file/build
server used by about 30 developers, which had roughly 180GB allocated in
a ~400GB UFS/SVM partition.  The server is a v40z with 5 internal and 12
external 72GB 10krpm drives.

We had planned in advance for ZFS when we bought and set up this server,
so a bunch of the disks were reserved for later ZFS use.  
I had a pool set up and migrated myself about two months ago (during
which time I shook out a few bugs in ZFS..).

After shutting down the system and remounting the SVM device read-only,
I wound up running multiple instances of "rsync" in parallel (one per
developer subdir) to move the bits.  I chose rsync because if the
transfer was interrupted, I could restart it from where it left off
without throwing away most of the work already done.
(Preserving/converting ACLs wasn't a consideration.)
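For the record, the rsync loop was basically of this shape (mount points
and directory names here are invented for illustration, and you'd want
different options if you do care about ACLs or extended attributes):

    # one rsync per developer subdir against the read-only SVM mount;
    # if interrupted, re-running the same loop picks up where it left off
    cd /ufs-ro
    for d in */ ; do
        rsync -aH "/ufs-ro/$d" "/tank/build/$d" &
    done
    wait

In practice you want to cap how many run at once; as noted below, about
six was enough to saturate the source.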

However, be aware that rsync does a complete tree walk of the source
before it starts moving any bits (but then, so does ufsdump..)

The source partition was on six 72GB drives configured as an SVM
concatenation of two three-disk-wide stripes (why?  It started life as a
three-disk stripe and then grew...)
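(From memory, and with device names invented, the layout amounted to
something like this -- a three-wide stripe later grown by attaching a
second three-wide stripe; I didn't go back and check the exact commands,
so treat it as a sketch:

    # original three-disk stripe
    metainit d10 1 3 c1t0d0s0 c1t1d0s0 c1t2d0s0
    # later growth: attach three more disks, which SVM concatenates on
    # as a second stripe
    metattach d10 c1t3d0s0 c1t4d0s0 c1t5d0s0
)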

The destination was a pool consisting of a single 5-disk raid-z group.
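Something along the lines of (pool name and devices invented here):

    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    zfs set compression=on tank
    zfs create tank/build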

Six instances of rsync seemed to saturate the source -- it appeared from
watching iostat that the main limiting factor was the ability of the
first stripe group to perform random I/O -- the first three disks of the
source saturated at around 160 I/Os per second (which is pretty much
what you'd expect from a 10krpm drive).  It took around 8-9 hours to
move all the files.
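(The watching was nothing more than the usual suspects -- iostat on the
source side and, if you want the destination's view, zpool iostat:

    iostat -xn 5            # r/s + w/s on the first three source disks
                            # flat-lined around 160 ops/sec
    zpool iostat -v tank 5  # per-vdev activity on the new pool
)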

After migration, the 180GB as seen by UFS ended up occupying around
120GB in the pool  (after compression and a 4:5 raid-z expansion).  

ZFS cites a compression ratio of around 2.10x (which was in line with
what I expected based on the early trials I conducted).  Based strictly
on the raid-z and compression ratios I would have predicted slightly
lower usage in the pool, but I'm not complaining.
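Nothing exotic about where those numbers come from -- zfs/zpool report
them directly:

    zfs get compressratio tank/build
    zfs list tank/build
    zpool list tank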

After the migration I did a final ufsdump backup of the read-only UFS
source file system to a remote file; ufsdump took around the same amount
of time as the parallel rsyncs.
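The dump itself was just a level-0 to stdout pushed off-box, something
like this (host and paths invented, and an NFS-mounted target file would
work just as well):

    ufsdump 0f - /dev/md/rdsk/d10 | ssh backuphost 'cat > /dumps/buildsrv.final.0'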

After a day spent listening for screams, I then unmounted it,
metaclear'ed it, and then added another two 5-disk raid-z groups to the
pool based on the free disks available.
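In command form, that teardown and expansion was approximately
(metadevice and disk names invented):

    umount /ufs-ro
    metaclear d10
    zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
    zpool add tank raidz c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0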

Since then I've been collecting hourly data on the allocation
load-balancing to see how long it will be before allocation balances out
across the three groups..
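(Nothing fancy there either -- "zpool iostat -v" shows per-vdev
allocation, so an hourly cron entry along these lines does the job; log
path invented:

    0 * * * * /usr/sbin/zpool iostat -v tank >> /var/tmp/tank-alloc.log
)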

                                        - Bill
