Jerry Preston wrote:
> Hi!,
>
> I am looking for a better and faster way to deal with a 4 - 8 meg data
> file. This file has been saved as a .csv file for Excel to read in.
>
> All I am interested in is the first three cells of ',' delimited data.
>
> Die,Row 0, Column 11
> Test Result,1
>
As others pointed out, it's always better to process the file as you
read it instead of slurping the whole thing into memory and then
processing the array.
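
Something along these lines should do it (untested, and assuming none of
the cells contain commas inside quotes). The limit of 4 on split keeps
Perl from splitting the rest of a long row; everything after the third
comma stays in one chunk that we just ignore:

open( FI, $file_path ) || die "unable to open $file_path: $!\n";
while ( my $line = <FI> ) {
    chomp $line;
    # keep only the first three comma-separated cells
    my ( $c1, $c2, $c3 ) = split /,/, $line, 4;
    # ... do whatever you need with $c1, $c2, $c3 here ...
}
close FI;
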
On Friday, October 4, 2002, at 09:21 AM, Jerry Preston wrote:
> Hi!,
Howdy.
> I am looking for a better and faster way to deal with a 4 - 8 meg
> data file. This file has been saved as a .csv file for Excel to read in.
A "better" way, is pretty open to interpretation, so here's my
Jerry Preston wrote:
>
> Hi!,
Hello,
> I am looking for a better and faster way to deal with a 4 - 8 meg data
> file. This file has been saved as a .csv file for Excel to read in.
>
> [snip]
>
> open( FI, $file_path ) || die "unable to open $file_path $!\n";
> @file_data = <FI>;
Reading the whole file into @file_data is where the time and memory go.
Process each line as you read it instead, and keep only the three cells
you need.
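
Also, if any of the cells can ever contain a comma inside quotes, a plain
split on ',' will chop them up. A CPAN module such as Text::CSV_XS copes
with that; roughly (untested):

use Text::CSV_XS;

my $csv = Text::CSV_XS->new( { binary => 1 } )
    or die "cannot create Text::CSV_XS parser\n";

open( FI, $file_path ) || die "unable to open $file_path $!\n";
while ( my $row = $csv->getline( \*FI ) ) {
    # first three cells of the parsed row
    my ( $c1, $c2, $c3 ) = @{$row}[ 0 .. 2 ];
    # ... use the three cells here ...
}
close FI;
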
Hi!,
I am looking for a better and faster way to deal with a 4 - 8 meg data
file. This file has been saved as a .csv file for Excel to read in.
All I am interested in is the first three cells of ',' delimited data.
Die,Row 0, Column 11
Test Result,1
Score,1
PMark Score,0