On Mar 21, Jonathan E. Paton said:

>> I have one big text file and a set of strings
>> to be replaced by another set of strings.
>> Currently, I am reading each line of the file
>> and replacing one set of strings with another,
>> one after another.  Is there a more efficient
>> way of doing this?  The data is so huge that
>> this job takes around 7 hours of total time on
>> a Sun Ultra 10.
>
>my $word = "WORD";
>
>while (<>) {
>    s/$word/lie/g; #Slow
>}
>
>is slow because that $word forces the regex to
>be recompiled each time through.  The best way
>to solve this problem is to create a Perl
>script on the fly and run it using eval $script;
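
For reference, the generate-and-eval idea from the quoted post
would look roughly like the sketch below; the %map table and
the surrounding scaffolding are my own illustration, not code
from the post, and as explained next it is not actually needed
to avoid recompilation.

  # Sketch of the quoted suggestion: hard-code every substitution
  # into generated source, so each pattern is a compile-time
  # constant, then eval the generated script.
  my %map  = ( 'WORD' => 'lie' );   # hypothetical replacement table
  my $code = "while (<>) {\n";
  for my $from (keys %map) {
    $code .= sprintf "  s/%s/%s/g;\n", quotemeta($from), $map{$from};
  }
  $code .= "  print;\n}\n";
  eval $code;
  die $@ if $@;                     # report any compile or runtime error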

Not true about that regex.  Perl only recompiles a
pattern that interpolates variables when the
interpolated string has actually changed, so a
$word that never changes costs one compile, not
one per line.  Here is a comparison:

  for $w (@words) {
    for (@lines) {
      if (/$w/) { ... }  # $w is fixed inside this loop: one compile per word
    }
  }

vs.

  for (@lines) {
    for $w (@words) {
      if (/$w/) { ... }  # $w changes every iteration: recompile for each match
    }
  }

The second can be much slower than the first, because
$w changes on every pass through the inner loop, so the
pattern has to be recompiled for every match attempt;
in the first, each pattern is compiled only once per word.
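
For the original problem (many fixed strings to replace in
one huge file), the same principle suggests folding all the
search strings into a single alternation that is compiled
once, then making a single pass over the file.  A minimal
sketch, assuming the replacements can be held in a hash (the
%map name, the sort order, and the I/O are mine, not from
the original post):

  use strict;
  use warnings;

  # Hypothetical replacement table: search string => replacement.
  my %map = (
    'foo' => 'bar',
    'baz' => 'quux',
  );

  # One alternation, longest keys first so longer matches win,
  # with metacharacters escaped.  qr// compiles it exactly once.
  my $alt = join '|',
            map { quotemeta }
            sort { length $b <=> length $a } keys %map;
  my $re  = qr/($alt)/;

  while (my $line = <>) {
    $line =~ s/$re/$map{$1}/g;   # one substitution pass per line
    print $line;
  }

Since $re never changes, the regex engine compiles it once
no matter how many millions of lines go by.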

-- 
Jeff "japhy" Pinyan      [EMAIL PROTECTED]      http://www.pobox.com/~japhy/
RPI Acacia brother #734   http://www.perlmonks.org/   http://www.cpan.org/
** Look for "Regular Expressions in Perl" published by Manning, in 2002 **
<stu> what does y/// stand for?  <tenderpuss> why, yansliterate of course.
[  I'm looking for programming work.  If you like my work, let me know.  ]

