> OK, I had to try the two ways again to see how much difference it made. I
> created a fixed-field file of random contents, 14,500 lines long by 80
> columns wide, and tried processing the lines (using substr($_, ...) to
> break each line into 4 sections, substituting based on a few patterns, and
> changing a couple of columns, as in my previous real-life example) to see
> whether loading the entire file into an array made as much performance
> difference as I had thought previously. The difference on a file that size
> was too small to be worth mentioning. Either way, it processed the 14,500
> line file and wrote the new contents to the new file in less than three
> seconds. Granted, I am using a different OS than when I did that test
> before, but still, the difference was virtually indiscernible. Therefore,
> I'll concede my point about a significant performance difference.
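
For reference, here is a minimal sketch of the kind of fixed-field
processing being described. The field widths (4 x 20 columns), the file
names, and the particular substitutions are assumptions for illustration,
not the original poster's exact code:

    #!/usr/bin/perl
    # Line-at-a-time processing of an 80-column fixed-field file.
    use strict;
    use warnings;

    open my $in,  '<', 'input.dat'  or die "input.dat: $!";
    open my $out, '>', 'output.dat' or die "output.dat: $!";

    while (my $line = <$in>) {           # one line per iteration
        chomp $line;

        # Break the 80-column record into 4 fixed-width sections.
        my $f1 = substr($line,  0, 20);
        my $f2 = substr($line, 20, 20);
        my $f3 = substr($line, 40, 20);
        my $f4 = substr($line, 60, 20);

        # Example substitutions on individual sections.
        $f2 =~ s/OLD/NEW/g;              # pattern-based replacement
        $f3 =~ s/\s+$//;                 # trim trailing blanks

        print {$out} "$f1$f2$f3$f4\n";
    }

    close $in;
    close $out;

The slurp variant just replaces the while loop with
my @lines = <$in>; and a foreach over @lines.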

See, the thing is that files are (generally) buffered, so large portions of
the file are being read into memory anyway; your perl program sees it a line
at a time, but the OS doesn't.
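
If you want to measure the two approaches on your own system, the core
Benchmark module makes a quick comparison easy. A rough sketch; 'input.dat'
and the substr() width are placeholders:

    #!/usr/bin/perl
    # Compare line-by-line reading against slurping into an array.
    use strict;
    use warnings;
    use Benchmark qw(timethese);

    timethese(50, {
        line_by_line => sub {
            open my $fh, '<', 'input.dat' or die $!;
            while (my $line = <$fh>) {
                my $field = substr($line, 0, 20);   # token amount of work
            }
            close $fh;
        },
        slurp_array => sub {
            open my $fh, '<', 'input.dat' or die $!;
            my @lines = <$fh>;                      # whole file in memory
            close $fh;
            for my $line (@lines) {
                my $field = substr($line, 0, 20);
            }
        },
    });

On a buffered filesystem the two usually come out close, which matches the
result reported above.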

Performance will vary depending on how file I/O is implemented in a) that
version of perl (not having seen the source since... well, waaay too long
ago... I don't know how abstracted things like that are) and b) the
underlying OS.
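
If you're curious what I/O layers a given perl build actually puts on a
handle, PerlIO::get_layers (core since 5.8) will tell you; again,
'input.dat' is just a placeholder:

    #!/usr/bin/perl
    # Show the PerlIO layers in effect for a file handle.
    use strict;
    use warnings;

    open my $fh, '<', 'input.dat' or die $!;
    print join(' ', PerlIO::get_layers($fh)), "\n";   # e.g. "unix perlio"
    close $fh;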

Dave
