Tim Johnson wrote:
> There's one caveat to be aware of with doing a "@array = <FILEHANDLE>" type
> of file read. If you end up having to work with very large files (or you
> don't have a whole lot of RAM to work with), you could be slowing down your
> program, maybe drastically, if you try to load the entire file into memory
> at once, which is what the code below will do. Also, you are chomp()ing the
> array that has the contents of your file, but then you are reading from the
> file handle line by line when you proceed to the "while(<HANDLE>){"
> statement. Get rid of the @filecontents variable altogether. As a matter
> of fact, even the chomp is extraneous to your mission of substituting
> characters. Try this version, and see if you can tell where I've done
> things a bit differently.
>
> #########################
>
> use strict;
> my $file = "/path/to/my/file.txt";
>
> # open file for reading
> open(FILE, "<$file") || die "Could not open file for reading! $!";
>
> # open file for writing
> open(TEMP, ">$file.tmp") || die "Could not open file for writing! $!";
>
> while (<FILE>) {
>     # for each line read, replace text
>     $_ =~ s/find/replace/gi;
>     # print result to temp file
>     print TEMP $_;
> }
>
> # should happen automatically, but just for good measure...
> # (the parentheses matter: "close FILE || die ..." would parse as
> # close(FILE || die ...) and never report a failed close)
> close(FILE) || die "Could not close file! $!";
> close(TEMP) || die "Could not close file! $!";
>
> # remove the old file
> unlink $file;
>
> # rename the temp file to the old file's name
> rename("$file.tmp", $file) || die "The file could not be renamed! $!";
>
> #########################
Yeah, that does work. I like it. I will probably use that in the future for other things, but not in this particular case (see the last paragraph below).

I've found that chomping the whole file works well for the other text-pattern searching I do later in the program, and the program runs in under 50 seconds even though the text file has up to 400,000 lines in it. I used to have a for loop in a single Perl program that would run through dozens of 400,000-line files, but it would only do the first file fast. The 2nd, 3rd, 4th, . . . files would take 5 to 30 minutes, or more! I never did figure out why this was, but it's probably a memory problem of some kind. Now I just create a bunch of .pl files, one for each file I want to search, and run them separately. Not great, but it sure works fast.

My situation does not require me to actually have a file with the replaced text. It just requires that the substitution is made in the array, so that when I write some file lines to another file, the substitutions are there. I really should only do the substitution on the lines I want to write, but I've already done the whole 400,000-line file earlier in the program, and it only adds about 2 seconds to the run time. Perl is amazingly fast.

Thanks a bunch for your help. I plan to hit you guys with some other problems in the future, but I'm doing pretty well on this simple stuff so far.

Mike

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
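The in-array approach Mike describes — slurp the whole file, chomp it, substitute in memory, then write only the lines you need to another file — can be sketched as follows. The /tmp file names and the /beta/ selection pattern are made up for illustration:

```shell
# Hypothetical input file
printf 'alpha find\nbeta FIND\ngamma\n' > /tmp/lines.txt

perl -e '
    # Slurp every line into an array at once (fine for files that fit in RAM)
    open(my $in, "<", "/tmp/lines.txt") or die "open: $!";
    my @lines = <$in>;
    close($in);

    chomp @lines;

    # Substitute across the whole array; the file on disk is untouched
    s/find/replace/gi for @lines;

    # Write only the lines we care about to a second file
    open(my $out, ">", "/tmp/selected.txt") or die "open: $!";
    print {$out} "$_\n" for grep { /beta/ } @lines;
    close($out);
'

cat /tmp/selected.txt
# prints:
# beta replace
```

As the thread notes, for very large files the line-by-line while(<FILE>) loop is kinder to memory; the slurp version trades RAM for the convenience of having every line available for later pattern searches.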