On 21/10/2009, at 7:39 AM, Mike Bo wrote:
Once data resides within a pool, there should be an efficient method
of moving it from one ZFS file system to another. Think Link/Unlink
vs. Copy/Remove.
I agree with this sentiment; it's certainly a surprise when you first
notice it.
Here's my scenario... When I originally created a 3TB pool, I didn't
know the best way to carve up the space, so I used a single, flat ZFS
file system. Now that I'm more familiar with ZFS, managing the sub-
directories as separate file systems would have made a lot more
sense (separate policies, snapshots, etc.). The problem is that some
of these directories contain tens of thousands of files and many
hundreds of gigabytes. Copying this much data between file systems
within the same disk pool just seems wrong.
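For reference, the workaround today is a full copy: create the new child
dataset, copy the data across, then rename it into place. A minimal sketch
(the pool/dataset names "tank" and "photos" are hypothetical, purely for
illustration):

```shell
# Workaround: promote a directory to its own dataset by copying.
# "tank" and "photos" are hypothetical names for this example.
zfs create tank/photos_new                    # new child dataset
rsync -aHAX /tank/photos/ /tank/photos_new/   # copies every block, despite
                                              # both sides sharing one pool
rm -rf /tank/photos                           # remove the originals
zfs rename tank/photos_new tank/photos        # rename into place
```

This is exactly the Link/Unlink vs. Copy/Remove distinction above: every
block is physically rewritten even though source and destination live in
the same pool.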
I hope such a feature is possible and not too difficult to
implement, because I'd like to see this capability in ZFS.
It doesn't seem unreasonable. Presumably the properties of the
datasets involved (recordsize, checksum, compression, encryption,
copies, version, utf8only, casesensitivity) would have to match, or
else the operation would fall back to an ordinary copy?
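A sketch of what that precondition check might look like, using `zfs get`
to compare the properties listed above ("tank/src" and "tank/dst" are
hypothetical dataset names):

```shell
# Compare the relevant properties on two hypothetical datasets;
# a fast move would only be safe when they all match.
for p in recordsize checksum compression encryption copies \
         version utf8only casesensitivity; do
  a=$(zfs get -H -o value "$p" tank/src)
  b=$(zfs get -H -o value "$p" tank/dst)
  [ "$a" = "$b" ] || echo "mismatch on $p: $a vs $b"
done
```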
Regards,
mikebo
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss