Nitin Kalra wrote:
> 
> In a Perl script of mine I have to compare two 8M-10M files (each),
> which means 80-90M searches. As a normal procedure (up to 1M) I use
> hashes, but beyond 1M, system performance degrades drastically.
> 
> If anybody has faced something like this, please share how you
> solved it.

You could look at the File::Compare module here:

  http://search.cpan.org/~rgarcia/perl-5.10.0/lib/File/Compare.pm
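
A minimal sketch of how that might be used (the file names here are
just placeholders):

  use strict;
  use warnings;
  use File::Compare;

  # compare() returns 0 if the files are equal, 1 if they differ,
  # and -1 if an error occurred (e.g. a file could not be opened)
  my $result = compare('file_a.dat', 'file_b.dat');

  if    ($result == 0) { print "Files are identical\n" }
  elsif ($result == 1) { print "Files differ\n" }
  else                 { die "Error comparing files: $!" }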

But you should consider your data first. If you simply need to know
whether the two files are identical, consider calculating a checksum
of each. If you need to know /where/ the files differ, then it would
speed things up dramatically to sort them first.
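
If a checksum is enough, Digest::MD5 ships with Perl and streams the
file in chunks, so memory use stays flat however big the files are. A
sketch, again with placeholder file names:

  use strict;
  use warnings;
  use Digest::MD5;

  sub md5_of {
      my ($path) = @_;
      open my $fh, '<:raw', $path or die "Can't open $path: $!";
      # addfile() reads the filehandle in chunks, so even very
      # large files never need to fit in memory
      return Digest::MD5->new->addfile($fh)->hexdigest;
  }

  print md5_of('file_a.dat') eq md5_of('file_b.dat')
      ? "Files are identical\n"
      : "Files differ\n";

And if you can sort both files first (the system sort(1) handles files
of this size comfortably), a single linear merge finds every
difference without building a hash at all. A sketch, assuming both
files are sorted in plain string order:

  use strict;
  use warnings;

  open my $fh_a, '<', 'sorted_a.txt' or die "Can't open sorted_a.txt: $!";
  open my $fh_b, '<', 'sorted_b.txt' or die "Can't open sorted_b.txt: $!";

  my $line_a = <$fh_a>;
  my $line_b = <$fh_b>;

  # Walk both files in step; each line is read exactly once
  while (defined $line_a and defined $line_b) {
      if ($line_a lt $line_b) {
          print "only in A: $line_a";
          $line_a = <$fh_a>;
      }
      elsif ($line_a gt $line_b) {
          print "only in B: $line_b";
          $line_b = <$fh_b>;
      }
      else {    # lines match: advance both files
          $line_a = <$fh_a>;
          $line_b = <$fh_b>;
      }
  }

  # Whatever is left in either file has no counterpart in the other
  while (defined $line_a) { print "only in A: $line_a"; $line_a = <$fh_a>; }
  while (defined $line_b) { print "only in B: $line_b"; $line_b = <$fh_b>; }

After the sort that is a single O(n) pass, instead of holding 8-10M
hash keys in memory at once.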

HTH,

Rob
