On 08/01/06 12:40, Rick C. Petty wrote:
> On Tue, Aug 01, 2006 at 12:27:54PM -0500, Eric Anderson wrote:
>> Wouldn't this be incorrect for files that are really full of zeros?
>> It would turn them into sparse files when they shouldn't be, correct?
>> Is that what happens with other tools?
>
> Why is this bad?  I'm not suggesting that the default implementation
> should change, but that this "-s" option be added to properly optimize
> for sparse files.  A sparse file is one which contains blocks of pure
> zeros.  My example wouldn't vary in how the destination file is read
> but in how many blocks are allocated in the underlying file system.

It could possibly be bad if you have a real file (say, a 10GB file partially filled with zeros, such as a disk image created with dd) and you use cp with something like -spR to recursively copy all files. Your destination disk image would then be a sparse file, so later changes and modifications to it (block allocations) would fragment the image and make it slower. That would be unexpected and would probably go unnoticed for some time. I do see the usefulness in this, but I think one needs to be careful to either clearly note in the man page that it will make sparse files from *any* file containing a run of zeros larger than the block size, or make it 'do the right thing' and determine whether the source is actually sparse.
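For reference, the kind of zero-block detection being discussed might look like the sketch below. This is not the actual cp(1) patch, just an illustration under assumptions: the function names are hypothetical, a fixed block size stands in for st_blksize, and a real implementation would also handle signals and short reads.

```c
#include <sys/types.h>
#include <unistd.h>

/* Return 1 if the buffer is entirely zero bytes, else 0. */
static int
block_is_zero(const char *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (buf[i] != 0)
			return (0);
	return (1);
}

/*
 * Copy src to dst, seeking over all-zero blocks instead of writing
 * them, so the file system need not allocate blocks for them.
 * Returns 0 on success, -1 on error.
 */
static int
sparse_copy(int src, int dst, size_t blksize)
{
	char buf[65536];
	ssize_t n;
	off_t total = 0;

	if (blksize > sizeof(buf))
		blksize = sizeof(buf);
	while ((n = read(src, buf, blksize)) > 0) {
		if (block_is_zero(buf, (size_t)n)) {
			/* Leave a hole: advance the offset, write nothing. */
			if (lseek(dst, n, SEEK_CUR) == -1)
				return (-1);
		} else if (write(dst, buf, (size_t)n) != n)
			return (-1);
		total += n;
	}
	if (n < 0)
		return (-1);
	/* A trailing hole must still extend the file to full length. */
	return (ftruncate(dst, total));
}
```

Note that the copy is byte-for-byte identical either way (holes read back as zeros); only the allocated-block count differs, which is exactly the fragmentation concern above.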

Eric

--
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------
_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
