Jim Gibson <jimsgib...@gmail.com> writes:

> On Jun 13, 2013, at 1:30 PM, lee wrote:
>
>> In my application, my estimate is that there will be a set of around
>> 100--150 files.  Once a file is closed and reported one last time, it
>> doesn't need to be considered anymore, so the number of relevant files
>> is limited.  Each file is only about 2kB in size.
> [...]
>
> If you have only 150 files, each 2kB in size, you can just make a copy
> of each file in another folder. Then, use the File::Compare module to
> compare the copy with the updated files in the original folder. No
> persistent data needed, no hashing, no looking at dates or sizes,
> guaranteed to find differences.

Yes, I've been thinking about doing it that way.  Detecting changes
would be guaranteed, but it would limit the versatility of the script
and create quite an overhead in duplicate files.
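Roughly what I understand Jim to be suggesting, as a minimal sketch
(directory names and the copy-on-change behaviour are just assumptions
for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Compare qw(compare);
    use File::Copy    qw(copy);
    use File::Spec;

    # Hypothetical directories: files to watch, and the mirror copies.
    my $watch_dir  = 'incoming';
    my $mirror_dir = 'mirror';

    opendir my $dh, $watch_dir or die "Cannot open $watch_dir: $!";
    for my $name (grep { -f File::Spec->catfile($watch_dir, $_) } readdir $dh) {
        my $file   = File::Spec->catfile($watch_dir, $name);
        my $mirror = File::Spec->catfile($mirror_dir, $name);

        if (!-e $mirror) {
            print "new: $name\n";
            copy($file, $mirror) or warn "copy failed: $!";
        }
        elsif (compare($file, $mirror) != 0) {   # 0 = equal, 1 = differ, -1 = error
            print "changed: $name\n";
            copy($file, $mirror) or warn "copy failed: $!";
        }
    }
    closedir $dh;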

OTOH, what are the chances that when a file has changed, size /and/
mtime /and/ SHA-256 hash remain the same?  I'd probably have to deal
with many billions of files, or something would have to go really wrong,
before this becomes a significant issue ...
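The fingerprint I have in mind would combine all three, something along
these lines (the file name is made up, and the join format is arbitrary):

    use strict;
    use warnings;
    use Digest::SHA;

    # "size/mtime/sha256" fingerprint; a change in any of the three
    # means the file was modified.
    sub fingerprint {
        my ($file) = @_;
        my ($size, $mtime) = (stat $file)[7, 9];
        my $sha = Digest::SHA->new(256)->addfile($file)->hexdigest;
        return join '/', $size, $mtime, $sha;
    }

    print fingerprint('some_report_file.txt'), "\n";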

There's no way to get around persistent data for this.  To be able to
detect changes, I would have to keep a copy of each file until the file
is closed.
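With fingerprints instead of full copies, the persistent data shrinks to
one small record per file, e.g. kept across runs with Storable (state
file name and the glob pattern are assumptions; fingerprint() is the
sketch above):

    use strict;
    use warnings;
    use Storable qw(retrieve nstore);

    # %$seen maps filename => fingerprint string from the last run.
    my $state_file = 'seen_files.db';
    my $seen = -e $state_file ? retrieve($state_file) : {};

    for my $file (glob '*.txt') {        # whatever selects the relevant files
        my $fp = fingerprint($file);
        if (!exists $seen->{$file}) {
            print "new: $file\n";
        }
        elsif ($seen->{$file} ne $fp) {
            print "changed: $file\n";
        }
        $seen->{$file} = $fp;
    }

    nstore($seen, $state_file);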


-- 
"Object-oriented programming languages aren't completely convinced that
you should be allowed to do anything with functions."
http://www.joelonsoftware.com/items/2006/08/01.html

