If you know the size of the last line you can use 'seek' to get there, but
note that it operates on bytes, not characters.  If the records are a fixed
size, this is the most efficient way to do it:

use POSIX;  # provides the SEEK_END constant

seek(FH, -$recsize, SEEK_END) or die "Could not seek: $!";
my $last = <FH>;

But if the records have variable lengths you end up seeking to the end,
then reading a character, seeking back two, reading one, seeking back
two... until you hit the preceding record separator.
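
Roughly, that backward scan looks like the sketch below.  The filename is
made up for the example, and it assumes records are terminated by "\n":

use POSIX;  # SEEK_SET, SEEK_CUR and SEEK_END constants

open(FH, '<', 'data.log') or die "Could not open: $!";  # hypothetical file

# Start at the end of the file and walk backwards one byte at a time
# until we read a newline (the end of the next-to-last record).
seek(FH, 0, SEEK_END) or die "Could not seek: $!";
my ($char, $found) = ('', 0);
while (seek(FH, -2, SEEK_CUR)) {
    read(FH, $char, 1) == 1 or die "Could not read: $!";
    if ($char eq "\n") { $found = 1; last }
}
# If we ran into the start of the file there is only one line; rewind.
seek(FH, 0, SEEK_SET) unless $found;
my $last = <FH>;
close FH;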

If you have really big files this might be worthwhile, but your code will
be more maintainable if you forget about this kind of optimization until it
has shown itself to be a problem.  I'd stick with the idiom I posted
earlier if the files are guaranteed to be small, or the loop structure
smoot proposed if the files may be large.
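
The earlier posts aren't quoted below, so the following is only a guess at
the loop structure meant for large files; the usual memory-friendly pattern
keeps just the most recent line while reading sequentially:

# A sketch of the read-and-discard loop (an assumption about the
# proposed structure); only one line is held in memory at a time.
open(my $fh, '<', 'data.log') or die "Could not open: $!";  # hypothetical file
my $last;
$last = $_ while <$fh>;   # each read overwrites $last; the final value is the last line
close $fh;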

Peter C.

-----Original Message-----

If the file is huge I wouldn't recommend doing so, because it reads the
whole file into your array.
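
For reference, the slurp-into-an-array idiom under discussion looks roughly
like this (filename made up for the example):

open(my $fh, '<', 'data.log') or die "Could not open: $!";  # hypothetical file
my @lines = <$fh>;        # the entire file is read into memory here
my $last  = $lines[-1];   # grab the last line
close $fh;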

I think you can go directly to the last line if you know its exact length;
am I wrong on this one?  Like seeking to the end and reading back n chars?

Etienne

