Don't do the "put a file into an array" trick unless you're VERY sure the
file is small.
It's generally considered bad programming practice and a recipe for disaster
if you don't control the input.

Having said that, I'd like to note that there's also a memory penalty for
using an array instead of a single string.
You can check the archives of this list for a detailed explanation; for now
it's enough to say that allocating many separate array entries simply costs
MORE memory than allocating one string (which seems quite logical when you
think about it).
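
If you want to see the difference for yourself, here's a rough sketch; it
assumes you have the (non-core) Devel::Size module from CPAN installed:

use Devel::Size qw(total_size);

my $string = "some line\n" x 10_000;    # one big scalar
my @array  = ("some line\n") x 10_000;  # 10_000 separate scalars

print "string: ", total_size(\$string), " bytes\n";
print "array:  ", total_size(\@array),  " bytes\n";

The array version pays the per-scalar overhead 10_000 times over, so expect
the second number to be noticeably bigger.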

Now, there are 3 ways to do it.

1    if you insist on slurping the file into memory, use the following:
{ local $/; my $in = <I>; my $i = 0; ++$i while $in =~ /\n/g; print $i }

This simply counts the number of newlines and prints the result.
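
As an aside: if you're slurping anyway, tr/// counts newlines faster than
looping a /g match. A minimal sketch, using the same filehandle:

{ local $/; my $in = <I>; print $in =~ tr/\n// }

Either way, keep in mind that a last line without a trailing newline won't
be counted.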

2    same thing, but without slurping the file into memory... it DOES loop
over all the lines, though:
my $i = 0; while (<I>) { $i++ } print $i;

3    same thing again, but this time we use Perl's built-in line counter,
$.:
while (<I>) {} print $.;
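
For completeness, here's option 3 fleshed out into a tiny standalone script
(the usage and filename handling are just assumptions for the example):

#!/usr/bin/perl -w
use strict;

my $file = shift or die "usage: $0 file\n";
open my $fh, '<', $file or die "can't open $file: $!";
1 while <$fh>;                  # read (and discard) every line
print "$file has $. lines\n";   # $. holds the number of the last line read
close $fh;

Just remember to read $. before calling close(), since an explicit close
resets it.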

Just FYI,

Jos Boumans


> As sysread() doesn't use buffers, it's probably more efficient to use
> normal reads. Also, splitting lines yourself (i.e. in Perl) won't be as
> efficient as the C implementation. Thus I recommend using
>
> open(FILEH, "database/$_") or die "$!";
> my @tempfile = <FILEH>;
> close FILEH;
> my $lines = scalar @tempfile;
> print " - $lines entries\n";
>
> You don't need to handle zero lines in a special way, I think.
>


