On Thu, May 27, 1999 at 07:15:56AM -0700, Don Lewis wrote:
> } The problem seems to be that with successive updates that slightly
> } change the size of files, or add or delete files, a large number of
> } unallocated fragments are created.

Long ago, back when disks were small, slow, and expensive, someone
wrote a program that properly defragged a Unix filesystem.  It was
slow and clunky (run time was measured in days), but it DID work.
I don't appear to have it handy anymore, but you might try checking
the ancient comp.sources archives.  Sorry, it's been too long to
remember the program name.

Back in another lifetime, I actually used the above-mentioned
program to maintain an active filesystem with a similar fragmentation
problem.  Eventually, we determined that it was much faster to run
a script that locked a directory, rewrote the entire directory with
tar, then removed the old files and directory.  If you have periods
where pieces of your filesystem are quiescent and you script carefully
(we actually compared checksums of every file), this should let
you limp along until you come up with a better solution.
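Something along these lines captures the idea (an untested sketch,
not the script we used; the directory path, the chmod-as-lock trick,
and the temp files are all placeholders you'd adapt):

    #!/bin/sh
    # Rewrite a directory tree via tar to reallocate its blocks,
    # verifying checksums before removing the original.
    DIR=/u/data            # hypothetical directory to rewrite
    NEW=$DIR.new

    # Crude "lock": make the directory inaccessible while we work.
    chmod 0 "$DIR" || exit 1

    # Copy the whole tree through tar into a fresh directory, which
    # allocates new (hopefully less fragmented) blocks.
    mkdir "$NEW" &&
    (cd "$DIR" && tar cf - .) | (cd "$NEW" && tar xpf -) || exit 1

    # Compare checksums of every file before trusting the copy.
    (cd "$DIR" && find . -type f -exec cksum {} \; | sort) > /tmp/old.sum
    (cd "$NEW" && find . -type f -exec cksum {} \; | sort) > /tmp/new.sum
    cmp /tmp/old.sum /tmp/new.sum || exit 1

    # Only now remove the old tree and move the copy into place.
    rm -rf "$DIR" && mv "$NEW" "$DIR"
    chmod 755 "$DIR"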

/\/\ \/\/

