On 2013-02-26 21:30, Morris Hooten wrote:
> Besides copying data from /dev/md/dsk/x volume manager filesystems
> to new zfs filesystems, does anyone know of any zfs conversion tools
> to make the conversion/migration from svm to zfs easier?

Do you mean something like a tool that would change the metadata
around your user data in-place and turn an SVM volume into a ZFS pool,
like Windows' built-in FAT -> NTFS conversion? No, there is nothing
like that.

However, depending on your old system's configuration, you might have
to be careful about your choice of copying programs. Namely, if your
setup used ACLs (beyond the standard POSIX access bits), then you'd
need ACL-aware copying tools. Sun tar and cpio are two such tools (see
their man pages for usage examples), and rsync 3.0.10 was recently
reported to support Solaris ACLs as well, though I haven't tested that
myself. GNU tar and cpio are known to do a poor job with
Solaris-specific features, even if they might be superior for some
other tasks. The basic (Sun, not GNU) cp and mv should work correctly
too.
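
For example, a minimal sketch of an ACL-preserving copy with Sun tar
(the /src and /dst paths are placeholders for your actual
mountpoints; see the tar man page for the exact behavior of the "p"
modifier on your release):

    # Copy a tree, preserving modes and Solaris ACLs; "p" includes
    # ACLs in the archive on create and restores them on extract:
    cd /src && tar cpf - . | ( cd /dst && tar xpf - )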

I most often use "rsync -avPHK /src/ /dst/", especially if there are
no ACLs to think about, or if the target's inheritable ACLs are
acceptable (and overriding them with the original's access rights
might even be wrong).
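
For reference, the flags in that command do the following (with an
ACL-capable rsync build you could also add "-A" to carry ACLs over):

    # -a  archive mode: recurse, keep permissions, times, owners,
    #     symlinks and devices
    # -v  verbose output
    # -P  show progress and keep partially-transferred files
    # -H  preserve hard links
    # -K  treat symlinked directories on the receiver as directories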

Also, before you do the migration, think ahead about the storage and
I/O requirements of the datasets. For example, log files are often
huge and compress down by orders of magnitude, and the IOPS loss might
be negligible (there might even be a gain, due to smaller hardware
I/Os and fewer seeks). Randomly accessed (written) data might not like
heavier compression. Databases or VM images might benefit from smaller
maximum block sizes, although these are often not matched 1:1 with the
DB block size, but rather balanced at about 4 DB entries per FS block
of 32KB or 64KB (from what I've seen suggested on the list).
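
To illustrate, a hypothetical sketch of creating such tuned datasets
(the pool name "tank", the mountpoints and the property values below
are placeholder assumptions, not recommendations):

    # Logs: heavy compression; sequential writes tolerate it well
    zfs create -o compression=gzip-9 -o mountpoint=/export/logs tank/logs
    # Database files: lighter compression, smaller record size
    zfs create -o compression=lzjb -o recordsize=32k \
        -o mountpoint=/export/db tank/db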

Singly-written data, like OS images, might benefit from compression as
well. If you have local zones, you might benefit from carrying one
over (or installing one from scratch) as a typical example "DUMMY"
zone in a dedicated dataset, then cloning it into as many actual zone
roots as you need, and running "rsync -cavPHK --delete-after" from
each original zone into its clone; this way only differing files (or
parts thereof) would be transferred, giving you the benefits of
cloning (space savings) without the downsides of deduplication.
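
A hypothetical walk-through of that clone-and-rsync approach (pool,
dataset and zone names are placeholders):

    # One dataset holds the "golden" dummy zone root:
    zfs create tank/zones
    zfs create tank/zones/DUMMY
    # ... install or copy a typical zone into /tank/zones/DUMMY ...
    zfs snapshot tank/zones/DUMMY@golden
    # Clone it once per real zone:
    zfs clone tank/zones/DUMMY@golden tank/zones/zone1
    zfs clone tank/zones/DUMMY@golden tank/zones/zone2
    # Overwrite only what differs; -c compares files by checksum:
    rsync -cavPHK --delete-after /old/zones/zone1/ /tank/zones/zone1/
    rsync -cavPHK --delete-after /old/zones/zone2/ /tank/zones/zone2/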

Also, for data in the zones (such as database files, tomcat/glassfish
application server roots, etc.) you might like to use separate dataset
hierarchies, delegating a "root" ZFS dataset of each hierarchy into
the zone. This way your zone roots would live a separate life from the
application data and non-packaged applications, which might simplify
backups, etc., and you might be able to store these pieces in
different pools (e.g. SSDs for some data and HDDs for the rest,
though most list members would rightfully argue in favor of using the
SSDs as L2ARC instead).
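
For completeness, a sketch of such delegation with zonecfg (the zone
and dataset names are again placeholders); the delegated dataset then
becomes manageable from inside the zone:

    zfs create tank/zonedata
    zfs create tank/zonedata/zone1
    zonecfg -z zone1
        zonecfg:zone1> add dataset
        zonecfg:zone1:dataset> set name=tank/zonedata/zone1
        zonecfg:zone1:dataset> end
        zonecfg:zone1> commit
        zonecfg:zone1> exit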

HTH,
//Jim Klimov
