Oliver Fromme wrote:
While doing some performance tuning of a backup script I noticed that the -z option of our (bsd)tar behaves in a very suboptimal way. It's not only a lot slower than using gzip separately, but it also compresses worse.
It seems that you and others have seen very different performance. I'd be very interested in knowing why. I suspect it may have to do with average file size.

How big are the files you're archiving? Does the relative performance differ with larger or smaller files?

Right now, libarchive calls the libz compression function for each small piece of data. I think it might be possible to make it faster by combining blocks of data to make fewer calls to the compression routines in libz. (This is why I think the size of the files might matter; small files result in more calls to libz with small blocks of data.)

I am very surprised that you see different sizes of output. There are small differences between the compression code in libz and gzip, but I've only ever seen very trivial size differences because of that.

Tim Kientzle
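To make the coalescing idea concrete, here is a minimal, self-contained sketch of staging many small writes into a larger buffer before calling deflate(). This is not libarchive's actual code; the function names, buffer sizes, and the 512-byte write pattern are made up for illustration, and error checking is omitted.

/*
 * Sketch: instead of calling deflate() once per (possibly tiny)
 * archive block, accumulate blocks in a staging buffer and hand
 * zlib larger chunks, reducing per-call overhead.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define STAGE_SIZE  (64 * 1024)     /* coalesce small writes up to 64 KB */
#define OUT_SIZE    (64 * 1024)

static z_stream zs;                 /* zalloc/zfree/opaque default to Z_NULL */
static unsigned char stage[STAGE_SIZE];
static size_t staged;
static unsigned char out[OUT_SIZE];

/* Push whatever is in the staging buffer through deflate(). */
static void
flush_stage(int flush_mode)
{
    zs.next_in = stage;
    zs.avail_in = staged;
    do {
        zs.next_out = out;
        zs.avail_out = OUT_SIZE;
        deflate(&zs, flush_mode);
        fwrite(out, 1, OUT_SIZE - zs.avail_out, stdout);
    } while (zs.avail_out == 0);
    staged = 0;
}

/* Accumulate a block; only call into zlib when the buffer fills. */
static void
compress_block(const unsigned char *buf, size_t len)
{
    while (len > 0) {
        size_t n = STAGE_SIZE - staged;
        if (n > len)
            n = len;
        memcpy(stage + staged, buf, n);
        staged += n;
        buf += n;
        len -= n;
        if (staged == STAGE_SIZE)
            flush_stage(Z_NO_FLUSH);
    }
}

int
main(void)
{
    deflateInit(&zs, Z_DEFAULT_COMPRESSION);

    /* Simulate many small writes, as tar produces for headers and small files. */
    unsigned char block[512];
    memset(block, 'x', sizeof(block));
    for (int i = 0; i < 10000; i++)
        compress_block(block, sizeof(block));

    flush_stage(Z_FINISH);          /* drain remaining data and finish the stream */
    deflateEnd(&zs);
    return (0);
}

Whether this is really the bottleneck would need profiling; zlib's per-call overhead is modest, but an archive containing many small files generates a very large number of such calls, so it seems a plausible place to look.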