Gerard Beekmans wrote:
> David Fix wrote:
>> The for construct doesn't have this issue.  :)
> 
> Good to know.
> 
> Bruce, since you brought this up, can you run a test to see if the "for"
> method is faster than using find?
> 
> The way we use it, a recursive find in /tmp or a "rm -r /tmp/*" in a
> for-loop has the exact same result. Using find would allow easier
> changes down the road if we ever need to have more fine-grained control
> which files and/or directories are removed or left alone.
> 
> If there is a large performance difference in find vs. for, then I'd
> like to consider using "for" until such a need actually arises to use
> the more versatile "find" method.
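As a sketch of what that finer-grained control could look like (the
scratch directory and the "keep-*" rule below are invented for
illustration, they are not anything the book currently does):

```shell
# Safe demonstration: a mktemp scratch directory stands in for /tmp.
scratch=$(mktemp -d)
mkdir "$scratch/lost+found"
touch "$scratch/build.log" "$scratch/keep-sources"

# Hypothetically refined cleanup: spare lost+found and also anything
# matching "keep-*" (the keep-* pattern is purely illustrative).
find "$scratch" -xdev -mindepth 1 \
    ! -name lost+found ! -name 'keep-*' -delete

ls "$scratch"        # lost+found and keep-sources survive
rm -rf "$scratch"
```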

Sure.  I have a script for building subversion in /tmp.  After a fresh
build, it did:

$ time for file in /tmp/subversion/*; do if [ "$file" !=
/tmp/subversion/lost+found ]; then rm -r "$file"; fi; done

real    0m0.981s
user    0m0.008s
sys     0m0.139s
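For what it's worth, a more defensive shape for that loop (quoting each
path, comparing basenames instead of full paths, and coping with an
empty directory) might look like this; the scratch directory is only
for illustration:

```shell
scratch=$(mktemp -d)            # stand-in for /tmp/subversion
mkdir "$scratch/lost+found"
touch "$scratch/a.o" "$scratch/b.o"

for file in "$scratch"/*; do
    [ -e "$file" ] || continue          # glob matched nothing
    if [ "$(basename "$file")" != lost+found ]; then
        rm -r "$file"
    fi
done

ls "$scratch"        # only lost+found remains
rm -rf "$scratch"
```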

Doing another build, I get:

$ cd /tmp/subversion
$ time find . -xdev -mindepth 1 ! -name lost+found -delete

real    0m0.501s
user    0m0.006s
sys     0m0.137s
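If anyone wants to reproduce the comparison on their own machine, a
rough harness along these lines should work under bash (the file count
of 500 is arbitrary, and the timings will of course differ from mine):

```shell
scratch=$(mktemp -d)            # throwaway tree, safe to delete
mkdir "$scratch/lost+found"

populate() {
    i=0
    while [ "$i" -lt 500 ]; do
        touch "$scratch/file$i"
        i=$((i + 1))
    done
}

populate
time find "$scratch" -xdev -mindepth 1 ! -name lost+found -delete

populate
time for f in "$scratch"/*; do
    if [ "$(basename "$f")" != lost+found ]; then rm -r "$f"; fi
done

rm -rf "$scratch"
```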

Since both builds were fresh, the info for both was cached in memory.
Perhaps the whole issue is just my problem because I build in /tmp.  I
could just as easily build in /var/tmp.  A quick check shows that I have
11785 files in /tmp right now.  Probably a bit more than most users.  :)
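(I don't know exactly how Bruce counted, but one quick way to get such
a number without crossing into other filesystems is:)

```shell
# One line per file or directory under /tmp, on this filesystem only.
# Errors from entries we can't read are discarded.
find /tmp -xdev 2>/dev/null | wc -l
```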

  -- Bruce
-- 
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/