> > Howdy list.
> > 
> > I've never had to work with really big files before (I'm used
> > to tiny ones you can slurp in all at once), and now I have to
> > process a 180MB text file line by line. I was wondering about the
> > most efficient way to do so from within a script, not via the
> > command line.
> > 
> > Would I just do
> >  open(FH, ...) or die $!;
> > 
> >     while (<FH>) { process line }
> > 
> >  close(FH);
> > 
> > Or is there a better way?
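> > 
> > For what it's worth, here is the fuller version I've sketched out so
> > far (the filename and the process_line() sub are just placeholders for
> > my real data and logic):
> > 
> >     #!/usr/bin/perl
> >     use strict;
> >     use warnings;
> > 
> >     my $file = 'big_input.txt';   # placeholder name for the 180MB file
> > 
> >     # lexical filehandle, three-argument open
> >     open(my $fh, '<', $file) or die "Can't open $file: $!";
> > 
> >     while (my $line = <$fh>) {
> >         chomp $line;
> >         process_line($line);      # stand-in for the real per-line work
> >     }
> > 
> >     close($fh) or die "Can't close $file: $!";
> > 
> >     sub process_line {
> >         my ($line) = @_;
> >         # real processing would go here
> >     }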
> > 
> > TIA
> > 
> > Dan
> > 
> That's the general idea. There could be variations depending 
> on what you're trying to accomplish in the processing of the 
> lines, or whether in fact you need to process line by line at 
> all. 180MB is not really a "really big file"; I process files 
> in the gigabyte range with excellent performance results. If 
> you're considering something like:
> 
> while (<FH>) {
>     chomp;                           # strip the newline
>     my @fields = split /$delim/;     # split the line on your delimiter
>     foreach my $field (@fields) {
>         # lots of field-level processing
>     }
> }
> 
> you might consider other methods. But even this type of 
> construct works very well on files in the 180MB range. Bear 
> in mind that I work on a large HP-UX box; if your system 
> is a 386sx16, your mileage may certainly vary.
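> 
> Just as one illustration of what I mean by other methods (a rough
> sketch; the delimiter and field names here are made up): if you only
> need the first couple of fields from each record, a limited split
> avoids chopping up the parts you never look at.
> 
>     my $delim = "\t";                  # hypothetical tab-delimited data
>     while (<FH>) {
>         chomp;
>         # stop after three pieces; everything else stays in $rest
>         my ($id, $name, $rest) = split /$delim/, $_, 3;
>         # work with $id and $name only
>     }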
> 
> HTH
> Steve

Thanks for the input, Steve! Good to know.

Dan

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]