On Tue, Aug 01, 2006 at 12:51:32PM -0500, Eric Anderson wrote:
> On 08/01/06 12:40, Rick C. Petty wrote:
> 
> It could possibly be bad if you have a real file (say a 10GB file, 
> partially filled with zeros - a disk image created with dd for 
> instance), and you use cp with something like -spR to recursively copy 
> all files.  Your destination disk image would then be a sparse file, so 
> subsequent writes to the file (new block allocations) 
> would fragment the image and make it slower.  That would be unexpected 
> and probably go unnoticed for some time.  I do see the usefulness in 
> this, but I think one needs to be careful to either clearly note in the 
> man page that it will make sparse files from *any* file containing a 
> string of zeros larger than the block size,

I completely agree.  This is why it would not be the default behavior.
Also, gnutar provides this option and describes it simply as "handle
sparse files efficiently"; we could reword that to make it clear that
by "efficient" we mean the program will allocate as few blocks as
possible in the underlying filesystem.  I think a clear description in
the man page would suffice.
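
To sketch the idea in C (the name sparse_copy and the block size are
made up here; a real implementation would also need to handle short
writes, check errors more carefully, and ftruncate() the output so a
trailing hole still leaves the file at its full length):

    #include <string.h>
    #include <unistd.h>

    #define BLKSIZE 65536	/* hypothetical copy-block size */

    /*
     * Copy in_fd to out_fd, seeking over all-zero blocks instead of
     * writing them, so the filesystem leaves those blocks unallocated.
     */
    static int
    sparse_copy(int in_fd, int out_fd)
    {
	    static const char zeroes[BLKSIZE];
	    char buf[BLKSIZE];
	    ssize_t n;

	    while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
		    if (memcmp(buf, zeroes, (size_t)n) == 0) {
			    /*
			     * All zeroes: advance the write offset
			     * without writing, leaving a hole behind.
			     */
			    if (lseek(out_fd, n, SEEK_CUR) == -1)
				    return (-1);
		    } else if (write(out_fd, buf, n) != n)
			    return (-1);
	    }
	    return (n == 0 ? 0 : -1);
    }

Every run of zeroes at least one buffer long becomes a seek instead of
a write, which is exactly why *any* sufficiently long string of zeros
in the source would come out as a hole, as noted above.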

> or it needs to 'do the right 
> thing' and determine if it's sparse or not.

Unfortunately, determining that would require information the POSIX
calls simply don't expose.  I don't believe we should roll
filesystem-specific knowledge into cp.  However, there is utility in
performing a "space-saving" copy, just as a hardlink is quite
space-saving.  =)
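
The closest portable hint I know of is stat(2): st_blocks reports how
many blocks are actually allocated, so a helper along these lines (the
name is hypothetical, and the 512-byte unit is the common convention
rather than a guarantee) can at least guess that holes exist somewhere:

    #include <sys/stat.h>

    /*
     * Guess whether a file already contains holes: if fewer 512-byte
     * blocks are allocated than the logical size requires, part of
     * the file must be unallocated.  This says nothing about where
     * the holes are, which is the real problem for cp.
     */
    static int
    probably_sparse(const char *path)
    {
	    struct stat sb;

	    if (stat(path, &sb) == -1)
		    return (-1);
	    return ((off_t)sb.st_blocks * 512 < sb.st_size);
    }

Even that is only a heuristic (indirect blocks can inflate st_blocks,
and it cannot locate the holes), which is one more reason to keep cp
filesystem-agnostic and make the behavior opt-in.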

"-s" or "-S" could refer to sparse, space, squeeze, shrink, or whatever.

-- Rick C. Petty