>>>>> "mm" == Michael McKnight <michael_mcknigh...@yahoo.com> writes:
mm> as far as I know, tar, cpio, etc. don't capture ACL's and
mm> other low-level filesystem attributes.

Take another look with whatever specific ACLs you're using. Some of the
cpio formats will probably work: I think there was a thread here about
ACL copying working in cpio but not in pax. You have to try it.

mm> Plus, they are all susceptible to corruption while in storage,

Yes, of course. There are no magic beans.

mm> making recovery no more likely than with a zfs send.

Nonsense. With 'zfs send', recovery is impossible after any corruption.
With tar/cpio, partial recovery is the rule, not the exception. That is
a difference, and a big one. I am repeating myself over and over, and I
am baffled as to why this is so disputable.

mm> The checksumming capability is a key factor to me.

Follow the thread. cpio does checksumming, at least with some of its
stream formats. I showed an example of how to check that the checksums
are working, and how to prove they are missing from tar.

mm> I would rather not be able to restore the data than to
mm> unknowingly restore bad data.

I suppose that makes sense, but only for certain very specific kinds of
data that most people don't have. Of course being warned would be nice,
but I've rarely wanted to be warned by losing everything, including
files far away from the bit flip. Most of the time, especially for a
backup, I'd rather not be warned than get that kind of warning.

OTOH, if you're hauling the data from one place to another and throwing
away the DVD-R when you get it there, then maybe 'zfs send' is
appropriate. In that case you are not archiving the zfs send stream but
the expanded zpool at the remote location, which is how it's meant to
be used.

mm> it would be nicer if "zfs recv" would flag individual files
mm> with checksum problems rather than completely failing the
mm> restore.

It would be nice, but I suspect it's hard to do while preserving the
incremental dump feature.
There are too many lazy panics as it is, without wishing for
incrementals to roll forward from a corrupt base. Also, architecturally,
I think replication and storage should not be mixed, because the goals
when errors occur are so different. Fixing this problem at the cost of
making replication jobs less reliable would be a bad thing, so I like
separate tools, and an unstorable zfs send.

mm> What I need is a complete snapshot of the filesystem
mm> (ie. ufsdump) and, correct me if I'm wrong, but zfs send/recv
mm> is the closest (only) thing we have.

Using 'zfs send | zfs recv' to replicate one zpool into another zpool
is a second option: store the destination pool on DVD-R, not the
stream. If you have enough space to store disk images of the second
zpool, which it sounds like you do, then once you get 'split' working
you can split the images up and write them to DVD-R, too. Or you can
let ZFS do the splitting: make DVD-sized vdevs, export the pool, and
burn them. That's not as robust as a split cpio when faced with a lost
DVD, but it's worlds better than a split 'zfs send'.

As for your 'split' problem: I know I have used 'split' the way you
want, but I would have been using GNU split. Bob suggested bewaring of
split's line-orientedness (be sure to use -b). A couple of other people
suggested using bash's {a..z} syntax rather than plain globbing to make
sure you're combining the pieces in the right order. My system has
/usr/gnu/bin/split and /usr/5bin/split in addition to /usr/bin/split,
so you have a couple of others to try. You're checking the result the
right way, with md5sum, so you already have enough tools to narrow the
problem away from ZFS. If you get really desperate, you can use dd's
skip= and count= options to emulate split, and still use cat to
combine. Also check the file sizes: if you have a 2GB filesize ulimit
set, that could mess up the stdout redirection, though on my Solaris
system it seems to default to unlimited.
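The split/rejoin/verify workflow above can be sketched as follows. This
assumes GNU split and coreutils; a generated file stands in for the
'zfs send' stream, and the sizes and /tmp paths are illustrative (a
real DVD-sized run would use something like -b 4300m).

```shell
# Stand-in for the stream; in practice this would be 'zfs send ... > file'.
dd if=/dev/urandom of=/tmp/stream bs=1024 count=64 2>/dev/null

# -b makes split byte-oriented, which is essential for binary data.
# The default suffixes (aa, ab, ...) sort correctly under plain globbing.
split -b 16k /tmp/stream /tmp/stream.part.

# Reassemble and verify before trusting any piece to media.
cat /tmp/stream.part.* > /tmp/rejoined
md5sum /tmp/stream /tmp/rejoined   # the two digests must match

# dd can emulate split (skip= / count=) if no byte-capable split exists;
# each command extracts one 16 KB piece.
dd if=/tmp/stream of=/tmp/piece0 bs=16k count=1 skip=0 2>/dev/null
dd if=/tmp/stream of=/tmp/piece1 bs=16k count=1 skip=1 2>/dev/null
```

The md5sum check is the important part: it proves the reassembled
stream is byte-identical before any piece is burned, which narrows any
later failure away from the split/cat step.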
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss